
VMware ESXi 5.1 Update 1 Release Notes

VMware ESXi 5.1 Update 1 | 25 APR 2013 | Build 1065491

Last updated: 8 JULY 2013

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of ESXi 5.1
  • Internationalization
  • Compatibility and Installation
  • Upgrades for This Release
  • Open Source Components for VMware vSphere 5.1 Update 1
  • Product Support Notices
  • Patches Contained in this Release
  • Resolved Issues

What's New

This release of VMware ESXi contains the following enhancements:

  • Support for additional guest operating systems. This release updates support for many guest operating systems.
    For a complete list of guest operating systems supported with this release, see the VMware Compatibility Guide.

  • Resolved Issues. This release also delivers a number of bug fixes that have been documented in the Resolved Issues section.

Earlier Releases of ESXi 5.1

Features and known issues of ESXi 5.1 are described in the release notes for each release. To view release notes for earlier releases of ESXi 5.1, see the VMware vSphere 5.1 Release Notes.

Internationalization

VMware vSphere 5.1 Update 1 is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese

Compatibility and Installation

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and previous versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. In addition, check this site for information about supported management and backup agents before installing ESXi or vCenter Server.

The vSphere Client and the vSphere Web Client are packaged with the vCenter Server and modules ZIP file. You can install one or both clients from the VMware vCenter™ Installer wizard.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 5.1.1 adds support for ESXi 5.1 Update 1 and vCenter Server 5.1 Update 1 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

Hardware Compatibility for ESXi

To determine which processors, storage devices, SAN arrays, and I/O devices are compatible with vSphere 5.1 Update 1, use the ESXi 5.1 Update 1 information in the VMware Compatibility Guide.

The list of supported processors is expanded for this release. To determine which processors are compatible with this release, use the VMware Compatibility Guide.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with ESXi 5.1 Update 1, use the ESXi 5.1 Update 1 information in the VMware Compatibility Guide.

Beginning with vSphere 5.1, support level changes for older guest operating systems have been introduced. For descriptions of each support level, see Knowledge Base article 2015161. The VMware Compatibility Guide provides detailed support information for all operating system releases and VMware product releases.

The following guest operating system releases that are no longer supported by their respective operating system vendors are deprecated. Future vSphere releases will not support these guest operating systems, although vSphere 5.1 Update 1 does support them.

  • Windows NT
  • All 16-bit Windows and DOS releases (Windows 98, Windows 95, Windows 3.1)
  • Debian 4.0 and 5.0
  • Red Hat Enterprise Linux 2.1
  • SUSE Linux Enterprise 8
  • SUSE Linux Enterprise 9 prior to SP4
  • SUSE Linux Enterprise 10 prior to SP3
  • SUSE Linux Enterprise 11 prior to SP1
  • Ubuntu releases 8.04, 8.10, 9.04, 9.10 and 10.10
  • All releases of Novell Netware
  • All releases of IBM OS/2

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 5.1 Update 1. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are no longer supported. To use such virtual machines on ESXi 5.1 Update 1, upgrade the virtual machine compatibility. See the vSphere Upgrade documentation.

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for step-by-step guidance on installing and configuring ESXi and vCenter Server.

Although the installations are straightforward, several subsequent configuration steps are essential. In particular, read the following sections:

Migrating Third-Party Solutions

You cannot directly migrate third-party solutions installed on an ESX or ESXi host as part of a host upgrade. Architectural changes between ESXi 5.0 and ESXi 5.1 result in the loss of third-party components and possible system instability. To accomplish such migrations, you can create a custom ISO file with Image Builder. For information about upgrading with third-party customizations, see the vSphere Upgrade documentation. For information about using Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

vSphere 5.1 Update 1 supports only CPUs with LAHF and SAHF CPU instruction sets. During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 5.1 Update 1. If your host hardware is not compatible, a purple screen appears with an incompatibility information message, and you cannot install or upgrade to vSphere 5.1 Update 1.
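
A hedged way to pre-check this requirement is to boot a Linux environment on the target hardware and look for the lahf_lm CPU flag (CPUID leaf 0x80000001, ECX bit 0), which indicates LAHF/SAHF support in 64-bit mode. This check is not part of the ESXi installer and is shown only as a sketch:

    # Run from a Linux shell on the target hardware (not from ESXi).
    # A non-empty result means the CPU supports LAHF/SAHF in 64-bit mode.
    grep -o lahf_lm /proc/cpuinfo | sort | uniq -c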

Upgrades for This Release

For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.

ESXi 5.1 Update 1 offers the following tools for upgrading ESX/ESXi hosts:

  • Upgrade interactively using an ESXi installer ISO image on CD-ROM, DVD, or USB flash drive. You can run the ESXi 5.1 Update 1 installer from a CD-ROM, DVD, or USB flash drive to do an interactive upgrade. This method is appropriate for a small number of hosts.
  • Perform a scripted upgrade. You can upgrade or migrate from ESX/ESXi 4.x, ESXi 5.0.x, and ESXi 5.1 hosts to ESXi 5.1 Update 1 by invoking an update script, which provides an efficient, unattended upgrade. Scripted upgrades also provide an efficient way to deploy multiple hosts. You can use a script to upgrade ESXi from a CD-ROM or DVD drive, or by PXE-booting the installer.

  • vSphere Auto Deploy. If your ESXi 5.x host was deployed using vSphere Auto Deploy, you can use Auto Deploy to reprovision the host by rebooting it with a new image profile that contains the ESXi upgrade.

  • esxcli. You can update and apply patches to ESXi 5.1.x hosts by using the esxcli command-line utility, either from a download depot on vmware.com or from a downloaded ZIP file of a depot prepared by a VMware partner. You cannot use esxcli to upgrade ESX or ESXi hosts to version 5.1.x from ESX/ESXi versions earlier than version 5.1. A minimal esxcli sketch follows this list.
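
The following is a minimal esxcli sketch for the ZIP-based update path, assuming the offline bundle has been copied to a datastore; the datastore path is illustrative, and the image profile name is the standard profile listed under Patches Contained in this Release:

    # List the image profiles contained in the downloaded offline bundle
    esxcli software sources profile list -d /vmfs/volumes/datastore1/update-from-esxi5.1-5.1_update01.zip

    # Apply the standard image profile, then reboot the host
    esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi5.1-5.1_update01.zip -p ESXi-5.1.0-20130402001-standard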

Supported Upgrade Paths for Upgrade to ESXi 5.1 Update 1:

In the paths below, ESX 4.0 includes ESX 4.0 Update 1, Update 2, Update 3, and Update 4; ESXi 4.0 includes ESXi 4.0 Update 1, Update 2, Update 3, and Update 4; ESX 4.1 includes ESX 4.1 Update 1, Update 2, and Update 3; ESXi 4.1 includes ESXi 4.1 Update 1, Update 2, and Update 3; and ESXi 5.0 includes ESXi 5.0 Update 1 and Update 2.

  • Upgrade deliverable: VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso
    Supported upgrade tools: VMware vCenter Update Manager, CD Upgrade, Scripted Upgrade
    Supported upgrade paths: Yes for all source versions (ESX 4.0, ESXi 4.0, ESX 4.1, ESXi 4.1, ESXi 5.0, and ESXi 5.1)

  • Upgrade deliverable: update-from-esxi5.1-5.1_update01.zip
    Supported upgrade tools: VMware vCenter Update Manager, ESXCLI, VMware vSphere CLI
    Supported upgrade paths: Yes for ESXi 5.1 only; No for ESX 4.0, ESXi 4.0, ESX 4.1, ESXi 4.1, and ESXi 5.0

  • Upgrade deliverable: Using patch definitions downloaded from VMware portal (online)
    Supported upgrade tools: VMware vCenter Update Manager with patch baseline
    Supported upgrade paths: Yes for ESXi 5.1 only; No for ESX 4.0, ESXi 4.0, ESX 4.1, ESXi 4.1, and ESXi 5.0

Open Source Components for VMware vSphere 5.1 Update 1

The copyright statements and licenses applicable to the open source software components distributed in vSphere 5.1 Update 1 are available at http://www.vmware.com/download/open_source.html. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent generally available release of vSphere.

Product Support Notices

  • vSphere Client. In vSphere 5.1, all new vSphere features are available only through the vSphere Web Client. The traditional vSphere Client will continue to operate, supporting the same feature set as vSphere 5.0, but not exposing any of the new features in vSphere 5.1.

    vSphere 5.1 and its subsequent update and patch releases are the last releases to include the traditional vSphere Client. Future major releases of VMware vSphere will include only the vSphere Web Client.

    For vSphere 5.1, bug fixes for the traditional vSphere Client are limited to security or critical issues. Critical bugs are deviations from specified product functionality that cause data corruption, data loss, system crashes, or significant customer application downtime where no workaround that can be implemented is available.

  • VMware Toolbox. vSphere 5.1 is the last release to include support for the VMware Tools graphical user interface, VMware Toolbox. VMware will continue to update and support the Toolbox command-line interface (CLI) to perform all VMware Tools functions.

  • VMI Paravirtualization. vSphere 4.1 was the last release to support the VMI guest operating system paravirtualization interface. For information about migrating virtual machines that are enabled for VMI so that they can run on later vSphere releases, see Knowledge Base article 1013842.

  • Windows Guest Operating System Customization. vSphere 5.1 is the last release to support customization for Windows 2000 guest operating systems. VMware will continue to support customization for newer versions of Windows guests.

  • VMCI Sockets. Guest-to-guest communications (virtual machine to virtual machine) are deprecated in the vSphere 5.1 release. This functionality will be removed in the next major release. VMware will continue to support host-to-guest communications.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi510-Update01 contains the following individual bulletins:

ESXi510-201304201-UG: Updates ESXi 5.1 esx-base vib
ESXi510-201304202-UG: Updates ESXi 5.1 tools-light vib
ESXi510-201304203-UG: Updates ESXi 5.1 net-ixgbe vib
ESXi510-201304204-UG: Updates ESXi 5.1 ipmi-ipmi-si-drv vib
ESXi510-201304205-UG: Updates ESXi 5.1 net-tg3 vib
ESXi510-201304206-UG: Updates ESXi 5.1 misc-drivers vibs
ESXi510-201304207-UG: Updates ESXi 5.1 net-e1000e vib
ESXi510-201304208-UG: Updates ESXi 5.1 net-igb vib
ESXi510-201304209-UG: Updates ESXi 5.1 scsi-megaraid-sas vib
ESXi510-201304210-UG: Updates ESXi 5.1 net-bnx2 vib


Patch Release ESXi510-Update01 Security-only contains the following individual bulletins:

ESXi510-201304101-SG: Updates ESXi 5.1 esx-base vib
ESXi510-201304102-SG: Updates ESXi 5.1 tools-light vib
ESXi510-201304103-SG: Updates ESXi 5.1 net-bnx2x vib
ESXi510-201304104-SG: Updates ESXi 5.1 esx-xserver vib


Patch Release ESXi510-Update01 contains the following image profiles:

ESXi-5.1.0-20130402001-standard
ESXi-5.1.0-20130402001-no-tools

Patch Release ESXi510-Update01 Security-only contains the following image profiles:

ESXi-5.1.0-20130401001s-standard
ESXi-5.1.0-20130401001s-no-tools


For information on patch and update classification, see KB 2014447.

Resolved Issues

This section describes resolved issues in this release in the following subject areas:

CIM and API Issues

  • ESXi host UUID field byte order was not reported according to SMBIOS version 2.6
    SMBIOS version 2.6 changed the byte order of the first three UUID fields, and ESXi 5.1 Update 1 now adopts this convention.
    On some systems, the SMBIOS UUID reported by an ESXi 5.1 Update 1 host might therefore differ from the SMBIOS UUID reported by previous versions of ESXi: the byte order of the first three fields is swapped. Any application that depends on the UUID must be modified to adopt SMBIOS version 2.6 or later.
  • ESXi 5.x host appears disconnected in vCenter Server and logs the ramdisk (root) is full message in the vpxa.log file
    If Simple Network Management Protocol (SNMP) is unable to handle the number of SNMP trap files (.trp) in the /var/spool/snmp folder of ESXi, the host might appear as disconnected in vCenter Server. You might not be able to perform any task on the host.
    The vpxa.log contains several entries similar to the following:
    WARNING: VisorFSObj: 1954: Cannot create file
    /var/run/vmware/f4a0dbedb2e0fd30b80f90123fbe40f8.lck for process vpxa because the inode table of its ramdisk (root) is full.
    WARNING: VisorFSObj: 1954: Cannot create file
    /var/run/vmware/watchdog-vpxa.PID for process sh because the inode table of its ramdisk (root) is full.


    This issue is resolved in this release. (A diagnostic sketch for checking the ramdisk inode table appears at the end of this section.)
  • vCenter Server or vSphere Client not able to monitor the Host Hardware RAID Controller status
    If you have installed a third-party Host Hardware RAID Controller (HHRC) CIM provider, such as the LSI CIM provider, the sfcb-hhrc process fails when an error occurs in the wrapped HHRC CIM provider.

    This release resolves the issue by enhancing the error handling capability and robustness of the HHRCWrapperProvider.
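
The following is a diagnostic sketch for the SNMP trap file issue above, assuming shell access to the ESXi host; the commands only inspect state, and the exact output columns can vary by build:

    # List the ramdisks, including the root ramdisk whose inode table can fill up
    esxcli system visorfs ramdisk list

    # Count the accumulated SNMP trap (.trp) files
    ls /var/spool/snmp | wc -l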

Guest Operating System Issues

  • When you create a virtual machine using Hardware version 9, the Mac OS X 10.8 64-bit option is not available
    When you create a virtual machine using Hardware version 9 in vSphere 5.1, the Mac OS X 10.8 64-bit operating system option does not appear in the guest operating system drop-down menu.

    This issue is resolved with this release if you use the NGC Client (the vSphere Web Client) to connect to vSphere 5.1 Update 1.
  • The time stamp counter (TSC) is not calibrated correctly for Solaris 10 virtual machines
    To keep the TSC calibration within the tolerance of the Network Time Protocol (NTP), Guest Timer Calibration support is added for Solaris 10 guest operating systems.
    This issue is resolved in this release.
  • On an ESXi 5.1 host, virtual machines might become unresponsive with busy vCPUs
    On an ESXi 5.1 host, a virtual machine running on virtual machine hardware version 7 might become unresponsive with busy vCPUs. This issue affects virtual machines on which guest operating systems use Advanced Programmable Interrupt Controller (APIC) logical destination mode. Virtual machines on which guest operating systems use physical destination mode are not affected.

    This issue has been resolved in this release.

  • Attempts to create a temporary file on the guest operating system might result in a GuestPermissionDeniedFault exception
    When you reconfigure a powered-on virtual machine to add a SCSI controller and then attempt to create a temporary file on the guest operating system, the operation might fail because the CreateTemporaryFileInGuest() function might return a GuestPermissionDeniedFault exception.

    This issue is resolved in this release.

Miscellaneous Issues

  • Use of the invoke-vmscript command displays an error
    When you use the invoke-vmscript PowerCLI cmdlet to run scripts on a virtual machine, the script fails with the following error message:

    The guest operations agent could not be contacted.

    This issue is resolved with this release.
  • Updated ESXi hosts might fail with a purple diagnostic screen when you attempt to plug in or unplug a keyboard or mouse through a USB port
    When you attempt to plug in or unplug a keyboard or a mouse through the USB port, the ESXi host might fail with the following error message:
    PCPU## locked up. Failed to ack TLB invalidate.

    This issue is resolved in this release.
  • Attempt to join an ESXi host to a domain using vSphere Authentication Proxy service (CAM service) might fail for the first time due to delay in DC replication
    The automated process to add an ESXi host for the first time using vSphere Authentication Proxy service (CAM service) might fail with an error message similar to the following:

    Cannot complete login due to an incorrect user name or password

    This issue occurs when the Authentication Proxy service creates the account on one Domain Controller while the ESXi host communicates with another Domain Controller, and Domain Controller replication has not yet completed. As a result, the ESXi host fails to connect to the Domain Controller (DC) with the newly created account.
    This issue does not occur when the ESXi host is added directly to the Active Directory domain without using the Authentication Proxy service.

    This issue is resolved in this release.
  • Component-based logging and advanced configuration options added for the hostd log
    To make it easier to collect the appropriate logs when an issue occurs, this release introduces component-based logging, which divides the hostd loggers into different groups and prefixes their log messages. In addition, a new advanced configuration option allows you to change the hostd log level without restarting the hostd service. (A configuration sketch appears at the end of this section.)

    This issue is resolved in this release.
  • Virtual machines running 64-bit Windows 2003 R2 guest operating system on an ESXi host with vShield Endpoint might fail with a blue diagnostic screen
    Under certain conditions, an erroneous API call in the vShield Endpoint driver vsepflt.sys might cause virtual machines running 64-bit Windows 2003 R2 guest operating system to fail with a blue diagnostic screen.

    This issue is resolved in this release.
  • VMware Ondisk Metadata Analyser (VOMA) fails with a segmentation fault
  • The VMware Ondisk Metadata Analyser(VOMA)fails with a segmentation fault because of VMFSCK module.
    This issue is resolved in this release.
  • vSphere network core dump does not collect complete data
    vSphere network core dump does not collect complete data if the disk dump fails to collect some data due to insufficient dump slot size.

    This issue is resolved in this release. Any failures due to disk dump slot size no longer affect network core dump.

  • ESXi hosts might fail if a hostd-worker thread consumes 100% CPU resources
    Under a sufficiently high workload on the ESXi host, a hostd-worker thread might get stuck consuming 100% CPU while fetching the virtual machine screenshot file for the vCloud Director UI. This issue might result in the failure of the ESXi host.

    This issue is resolved in this release.
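
The following is a configuration sketch for the hostd logging item above. The option path /Config/HostAgent/log/level (Config.HostAgent.log.level in the vSphere Client Advanced Settings) is an assumption about how the new advanced option is exposed, so verify the option name on your host:

    # Show the current hostd log level (assumed option path)
    esxcli system settings advanced list -o /Config/HostAgent/log/level

    # Change the hostd log level without restarting hostd (assumed option path)
    esxcli system settings advanced set -o /Config/HostAgent/log/level -s info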

Networking Issues

  • Long running vMotion operations might result in unicast flooding
    When using the multiple-NIC vMotion feature with vSphere 5, if vMotion operations continue for a long time, unicast flooding is observed on all interfaces of the physical switch. If the vMotion takes longer than the ageing time that is set for MAC address tables, the source and destination hosts start receiving high amounts of network traffic.

    This issue is resolved in this release.

  • ESXi host might display a compliance failure error message after a host profile is applied successfully to the host
    When you successfully apply a host profile to an ESXi host, the host becomes noncompliant and might display an error message similar to the following:

    IPv4 routes did not match

    This issue occurs when two or more VMkernel port groups use the same subnet.

    This issue is resolved in this release.

  • Virtual machines with the VMXNET 3 vNIC network adapter might cause error messages to be logged in the hostd log file
    A virtual machine with the VMXNET 3 vNIC network adapter might cause error messages similar to the following to be logged in the hostd log file:

    2012-06-12T07:09:03.351Z [5E342B90 error 'NetworkProvider'] Unknown port type [0]: convert to UNKNOWN.
    2012-06-12T07:09:03.351Z [5E342B90 error 'NetworkProvider'] Unknown port type [0]: convert to UNKNOWN.
    2012-06-12T07:09:03.351Z [5E342B90 error 'NetworkProvider'] Unknown port type [0]: convert to UNKNOWN.


    This issue occurs when the port type is unset, which is normal behavior, so you can ignore these messages.

    This issue is resolved in this release.

  • ESXi host stops responding with a purple diagnostic screen during arpresolve
    The ESXi host might stop responding during arpresolve and display a purple diagnostic screen with the following messages:

    2012-08-30T14:35:11.247Z cpu50:8242)@BlueScreen: #PF Exception 14 in world 8242:idle50 IP 0x4180280056ad addr 0x1
    2012-08-30T14:35:11.248Z cpu50:8242)Code start: 0x418027800000 VMK uptime: 7:14:19:29.193
    2012-08-30T14:35:11.249Z cpu50:8242)0x412240c87868:[0x4180280056ad]arpresolve@ # +0xd4 stack: 0x412240c878a8
    2012-08-30T14:35:11.249Z cpu50:8242)0x412240c878e8:[0x418027ff69ba]ether_output@ # +0x8d stack: 0x412240c87938
    2012-08-30T14:35:11.250Z cpu50:8242)0x412240c87a28:[0x4180280065ad]arpintr@ # +0xa9c stack: 0x41003eacd000
    2012-08-30T14:35:11.251Z cpu50:8242)0x412240c87a68:[0x418027ff6f76]ether_demux@ # +0x1a1 stack: 0x41003eacd000
    2012-08-30T14:35:11.252Z cpu50:8242)0x412240c87a98:[0x418027ff72c4]ether_input@ # +0x283 stack: 0x412240c87b50
    2012-08-30T14:35:11.253Z cpu50:8242)0x412240c87b18:[0x418027fd338f]TcpipRx@ # +0x1de stack: 0x2
    2012-08-30T14:35:11.254Z cpu50:8242)0x412240c87b98:[0x418027fd2d9b]TcpipDispatch@ # +0x1c6 stack: 0x410008ce6000
    2012-08-30T14:35:11.255Z cpu50:8242)0x412240c87c48:[0x4180278ed76e]WorldletProcessQueue@vmkernel#nover+0x3c5 stack: 0x0
    2012-08-30T14:35:11.255Z cpu50:8242)0x412240c87c88:[0x4180278edc79]WorldletBHHandler@vmkernel#nover+0x60 stack: 0x2
    2012-08-30T14:35:11.256Z cpu50:8242)0x412240c87ce8:[0x41802781847c]BHCallHandlers@vmkernel#nover+0xbb stack: 0x100410000000000
    2012-08-30T14:35:11.257Z cpu50:8242)0x412240c87d28:[0x41802781896b]BH_Check@vmkernel#nover+0xde stack: 0x4a8a29522e753
    2012-08-30T14:35:11.258Z cpu50:8242)0x412240c87e58:[0x4180279efb41]CpuSchedIdleLoopInt@vmkernel#nover+0x84 stack: 0x412240c87e98
    2012-08-30T14:35:11.259Z cpu50:8242)0x412240c87e68:[0x4180279f75f6]CpuSched_IdleLoop@vmkernel#nover+0x15 stack: 0x62
    2012-08-30T14:35:11.260Z cpu50:8242)0x412240c87e98:[0x41802784631e]Init_SlaveIdle@vmkernel#nover+0x13d stack: 0x0
    2012-08-30T14:35:11.261Z cpu50:8242)0x412240c87fe8:[0x418027b06479]SMPSlaveIdle@vmkernel#nover+0x310 stack: 0x0
    2012-08-30T14:35:11.272Z cpu50:8242)base fs=0x0 gs=0x41804c800000 Kgs=0x0

    This issue is resolved in this release.
  • Solaris 11 virtual machines with the VMXNET3 network adapter might write messages to /var/adm/messages every 30 seconds
    Solaris 11 virtual machines with the VMXNET3 network adapter might write messages to the /var/adm/messages file. A message similar to the following might be displayed in the system log:
    vmxnet3s: [ID 654879 kern.notice] vmxnet3s:1: getcapab(0x200000) -> no
    This message is repeated every 30 seconds.
    This issue is resolved in this release.
  • An ESXi host might stop responding and display a purple diagnostic screen due to an error during cleaning of the transmit ring
    An ESXi host might stop responding and display a purple diagnostic screen with messages similar to the following:

    @BlueScreen: #PF Exception 14 in world 4891:vmm0:HQINTSQ IP 0x41800655ea98 addr 0xcc
    41:11:55:21.277 cpu15:4891)Code start: 0x418006000000 VMK uptime: 41:11:55:21.277
    41:11:55:21.278 cpu15:4891)0x417f818df948:[0x41800655ea98]e1000_clean_tx_irq@esx:nover+0x9f stack: 0x4100b5610080
    41:11:55:21.278 cpu15:4891)0x417f818df998:[0x418006560b9a]e1000_poll@esx:nover+0x18d stack: 0x417f818dfab4
    41:11:55:21.279 cpu15:4891)0x417f818dfa18:[0x41800645013a]napi_poll@esx:nover+0x10d stack: 0x417fc68857b8
    41:11:55:21.280 cpu15:4891)0x417f818dfae8:[0x4180060d699b]WorldletBHHandler@vmkernel:nover+0x442 stack: 0x417fc67bf7e0
    41:11:55:21.280 cpu15:4891)0x417f818dfb48:[0x4180060062d6]BHCallHandlers@vmkernel:nover+0xc5 stack: 0x100410006c38000
    41:11:55:21.281 cpu15:4891)0x417f818dfb88:[0x4180060065d0]BH_Check@vmkernel:nover+0xcf stack: 0x417f818dfc18
    41:11:55:21.281 cpu15:4891)0x417f818dfc98:[0x4180061cc7c5]CpuSchedIdleLoopInt@vmkernel:nover+0x6c stack: 0x410004822dc0
    41:11:55:21.282 cpu15:4891)0x417f818dfe68:[0x4180061d180e]CpuSchedDispatch@vmkernel:nover+0x16e1 stack: 0x0
    41:11:55:21.282 cpu15:4891)0x417f818dfed8:[0x4180061d225a]CpuSchedWait@vmkernel:nover+0x20d stack: 0x410006108b98
    41:11:55:21.283 cpu15:4891)0x417f818dff28:[0x4180061d247a]CpuSched_VcpuHalt@vmkernel:nover+0x159 stack: 0x4100a24416ac
    41:11:55:21.284 cpu15:4891)0x417f818dff98:[0x4180060b181f]VMMVMKCall_Call@vmkernel:nover+0x2ba stack: 0x417f818dfff0
    41:11:55:21.284 cpu15:4891)0x417f818dffe8:[0x418006098d19]VMKVMMEnterVMKernel@vmkernel:nover+0x10c stack: 0x0
    41:11:55:21.285 cpu15:4891)0xfffffffffc058698:[0xfffffffffc21d008]__vmk_versionInfo_str@esx:nover+0xf59cc9f7 stack: 0x0
    41:11:55:21.297 cpu15:4891)FSbase:0x0 GSbase:0x418043c00000 kernelGSbase:0x0

    This error occurs if, during cleaning of the transmit ring, the CPU is diverted to perform other tasks and another CPU cleans the ring in the meantime; the first CPU then erroneously cleans the transmit ring again and ends up dereferencing a NULL skb.

    This issue is resolved with this release.
  • Network connectivity on IPv6 virtual machines not working with VMXNET3
    When more than 32 IPv6 addresses are configured on a VMXNET3 interface, unicast and multicast connectivity to some of those addresses is lost.

    This issue is resolved with this release.
  • Updates the tg3 driver to version 3.123b.v50.1
    The tg3 inbox driver version with ESXi 5.1 Update 1 is 3.123b.v50.1.
  • Host profile might fail to apply MTU value on the vSwitches of the destination host
    When you apply a host profile that modifies only the MTU value for a standard vSwitch, the new MTU configuration is not applied on vSwitches of the new destination host.
    This issue is resolved in this release.
  • Virtual machine might lose network connectivity to the external environment after vMotion in a vNetwork Distributed Switch environment
    A virtual machine might lose network connectivity to the external environment after vMotion in a vNetwork Distributed Switch environment.
    This issue might occur when all of the following conditions are true:
    • The virtual machine network port group is configured on a vNetwork Distributed Switch.
    • The virtual machine is configured with a VMXNET2 (Enhanced) or Flexible (VMXNET) NIC.
    • The virtual machine network port group security setting for MAC address changes is set to Reject.

    This issue is resolved in this release.

  • ESX/ESXi host with the Broadcom bnx2x async driver version 1.61.15.v50.1 might fail with a purple diagnostic screen
    An ESX/ESXi host with the Broadcom bnx2x async driver version 1.61.15.v50.1 might fail with a purple diagnostic screen when the bnx2x async driver sets the TCP segmentation offload (TSO) maximum segment size (MSS) value in the VMkernel to zero.

    This issue is resolved in this release.

  • Physical NICs set to Auto-negotiate cannot be changed to Fixed by using host profiles if the same speed and duplex settings are present
    The host profile compliance check is performed against the speed and duplex of a physical NIC. If the speed and duplex of a physical NIC of an ESXi host match those of the host profile, the ESXi host is shown as compliant, even if the physical NIC is set to Auto-negotiate and the host profile is set to Fixed. Also, physical NICs set to Auto-negotiate cannot be changed to Fixed by using host profiles if the speed and duplex settings of the ESXi host and the host profile are the same.

    This issue is resolved in this release.
  • ESXi 5.1 host with Beacon Probing enabled might fail upon reboot and displays a purple screen
    If you reboot an ESXi 5.1 host with Beacon Probing enabled, the host might fail with a purple screen and display error messages similar to the following:
    2012-09-12T13:32:10.964Z cpu6:4578)@BlueScreen: #PF Exception 14 in world 4578:helper34-0 IP 0x41803c0dbf43 addr 0x200
    2012-09-12T13:32:10.964Z cpu6:4578)Code start: 0x41803ba00000 VMK uptime: 0:00:10:57.178
    2012-09-12T13:32:10.965Z cpu6:4578)0x41220789ba70:[0x41803c0dbf43]NCPGetUplinkBeaconState@ # +0x1e stack: 0x410003000000
    2012-09-12T13:32:10.966Z cpu6:4578)0x41220789bf40:[0x41803c0dcc5d]NCPHealthChkTicketCB@ # +0x258 stack: 0x0
    2012-09-12T13:32:10.967Z cpu6:4578)0x41220789bf60:[0x41803c0a991d]NHCTicketEventCB@com.vmware.net.healthchk#1.0.0.0+0x3c stack: 0x0
    2012-09-12T13:32:10.968Z cpu6:4578)0x41220789bff0:[0x41803ba483df]helpFunc@vmkernel#nover+0x52e stack: 0x0

    This issue is resolved in this release.

  • Intel I350 Gigabit Network Adapter generates incorrect MAC addresses for an ESXi host
    On an ESXi host, incorrect or duplicate MAC addresses are assigned to the embedded Intel I350 Gigabit Network Adapter, resulting in high packet loss.

    This issue is resolved in this release.

  • Attempts to enable flow control on the Intel 82599EB Gigabit Ethernet Controller fail
    When you attempt to enable flow control on the Intel 82599EB Gigabit Ethernet controller, the ixgbe driver incorrectly sets the flow control mode to priority-based flow control in which the flow control is always disabled. As a result, the error message Cannot set device pause parameters: Invalid argument appears when you attempt to enable flow control.

    This issue is resolved in this release.

Security Issues

  • Update to libxml2 library addresses multiple security issues
    The ESXi userworld libxml2 library has been updated to resolve multiple security issues. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2011-3102, CVE-2012-2807 and CVE-2012-5134 to these issues.

Server Configuration Issues

  • Component-based logging might not be enabled in hostd log file
    Component-based logging might not be enabled in the hostd log file.

    This issue is resolved in this release by changing the logger names for the hostd file. You might encounter some changes in the log messages in the hostd log file.
  • ESXi host stops responding and displays a purple diagnostic screen
    An ESXi host stops responding and displays a purple diagnostic screen due to an out-of-order lock release in the IPMI driver. An error message similar to the following is displayed with an ipmi_request_settime and i_ipmi_request backtrace:

    Failed to ack TLB invalidate

    This issue is resolved with this release.
  • ESXi host might stop responding when it services two level-triggered interrupts simultaneously
    When level-triggered interrupts are invoked for the same device or driver in two different CPUs of an ESXi host and are run simultaneously, the ESXi host stops responding and displays a purple diagnostic screen with a message similar to either of the following:
    • mpt2sas_base_get_smid_scsiio
    • Failed at vmkdrivers/src_9/vmklinux_9/vmware/linux_scsi.c:2221 -- NOT REACHED

    This issue is resolved in this release by improving the interrupt handling mechanism for level-triggered interrupts.

  • vCenter Server and ESXi hosts display Dell H310 series card names incorrectly
    The Dell H310 series PCI IDs were not added to the megaraid_sas driver. As a result, vCenter Server and ESXi hosts display Dell H310 series card names incorrectly.

    This issue is resolved in this release.

  • Attempts to apply host profile might fail with an error message indicating that the CIM indication subscription cannot be deleted
    CIM indication subscriptions are stored in the repository and have a copy in memory. If you attempt to apply a host profile when the repository and the memory copy are not synchronized, the delete operation might fail. An error message similar to the following is written to /var/log/syslog.log:
    Error deleting indication subscription. The requested object could not be found

    This issue is resolved in this release.
  • Red Hat Enterprise Linux 4.8 32-bit virtual machine might show higher load average on ESXi 5.x compared to ESX 4.0
    A virtual machine running a Red Hat Enterprise Linux 4.8 32-bit guest with a mostly idle workload that has intermittent, simultaneous wakeups of multiple tasks might show a higher load average on ESXi 5.x than on ESX 4.0.
    This issue is resolved in this release.
  • Hardware Status tab might stop displaying host health status
    On an ESXi 5.1 host, the Small-Footprint CIM Broker daemon (sfcbd) might fail frequently and display CIM errors. As a result, the Hardware Status tab might stop displaying host health status, and syslog.log might contain an error message similar to the following:
    Timeout (or other socket error) sending request to provider.

    This issue is resolved in this release.

  • Misleading error messages in the hostd log when you assign a VMware vSphere Hypervisor vRAM license to an ESXi host with pRAM greater than 32GB
    When you attempt to assign a VMware vSphere Hypervisor edition vRAM license key to an ESXi host with more than 32GB of physical RAM, the ESXi host might log false error messages similar to the following in /var/log/vmware/hostd:
    2012-08-08T16:39:18.593Z [2AA78B90 error 'Default' opID=HB-host-84@121-9c61c8e-a8] Unable to parse MaxRam value:
    2012-08-08T16:39:18.594Z [2AA78B90 error 'Default' opID=HB-host-84@121-9c61c8e-a8] Unable to parse MaxRamPerCpu value:
    2012-08-08T16:39:18.594Z [2AA78B90 error 'Default' opID=HB-host-84@121-9c61c8e-a8] Unable to parse MinRamPerCpu value:
    2012-08-08T16:39:18.594Z [2AA78B90 error 'Default' opID=HB-host-84@121-9c61c8e-a8] Unable to parse vram value:

    This issue is resolved in this release.

Storage Issues

  • Adding a new hard disk to a virtual machine that resides on a Storage DRS enabled datastore cluster might result in Insufficient Disk Space error
    When you add a virtual disk to a virtual machine that resides on a Storage DRS enabled datastore, and the size of the virtual disk is greater than the free space available on the datastore, SDRS might migrate another virtual machine out of the datastore to allow sufficient free space for adding the virtual disk. The Storage vMotion operation completes, but the subsequent addition of the virtual disk to the virtual machine might fail and an error message similar to the following might be displayed:

    Insufficient Disk Space

    This issue might rarely occur when disk allocations or disk deallocations overlap and are performed on a single host.
    This issue might also occur when the storage does not respond quickly during a rapid series of disk allocations or disk deallocations as this can result in overlapping rescans too.

    This issue does not occur if you retry the operation after 30 seconds.
  • ESXi host might fail with a purple diagnostic screen if the SCSI command data direction is uninitialized during allocation
    If a third-party module creates a custom Small Computer System Interface (SCSI) command structure without using VMkernel APIs and runs this SCSI command structure, the ESXi host might fail with a purple diagnostic screen.
    This issue occurs if the SCSI command requires a Direct Memory Access (DMA) transfer and the data direction in the SCSI command structure is left uninitialized.
    This issue is resolved in this release.
  • ScsiDeviceIO related error message might be logged during device discovery
    During device discovery, if an optional SCSI command fails under certain conditions, an ESXi 5.1 host might log the failed optional SCSI commands. An error message similar to the following might be written to vmkernel.log:
    2011-10-03T15:16:21.785Z cpu3:2051)ScsiDeviceIO: 2316: Cmd(0x412400754280) 0x12, CmdSN 0x60b to dev "naa.600508e000000000f795beaae1d28903" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

    This issue is resolved in this release.
  • Attempts to create a diagnostic partition on a blank disk or GPT partitioned disk might fail
    Attempts to create a diagnostic partition using the vSphere Client might fail if you attempt to create a diagnostic partition on a blank disk (a disk with no partition table) or on a GPT partitioned disk with free space available at the end.
    An error message similar to the following is displayed when you attempt creating a diagnostic partition on a blank disk:

    Partition format unknown is not supported

    An error message similar to the following is displayed when you attempt creating a diagnostic partition on a GPT partitioned disk with free space available at the end:

    error occurred during host configuration.

    This issue is resolved in this release.
  • Unable to delete files from the VMFS directory after one or more files are moved to it
    After moving one or more files into a directory, an attempt to delete the directory or any of the files in the directory might fail. In such a case, the vmkernel.log file contains entries similar to the following:
    2012-06-25T21:03:29.940Z cpu4:373534)WARNING: Fil3: 13291: newLength 85120 but zla 2
    2012-06-25T21:03:29.940Z cpu4:373534)Fil3: 7752: Corrupt file length 85120, on FD <281, 106>, not truncating


    This issue is resolved in this release.

  • An ESXi host might display a purple diagnostic screen when you run ioctl IOCTLCMD_VMFS_QUERY_RAW_DISK from disklib
    An ESXi host stops responding with a purple diagnostic screen when a pseudo LUN of class SCSI_CLASS_RAID (0xc), with the same naa.id as one used by a physical Raw Device Mapping (pRDM) in a virtual machine, is presented as a new LUN and a rescan is performed.

    This issue is resolved in this release.
  • Unable to identify the virtual machine issuing READ10/WRITE10 warning messages
    With READ10/WRITE10 warnings, it is difficult to determine which virtual machine is issuing these commands because the logs display only the VSCSI handle ID. The world ID that is part of the VMkernel logs does not always correspond to the virtual machine issuing the command.

    This release resolves the issue by enhancing the READ10/WRITE10 warnings to also print the virtual machine name.
  • When you create a VMFS3 datastore with non-default start sector or on a Logical Unit Number with very small partitions, the QueryVmfsDatastoreExpandOptions and QueryVmfsDatastoreExtendOptions might fail
    The ESXi host (hostd) incorrectly handles the DatastoreSystem managed object method calls to QueryVmfsDatastoreExpandOptions and QueryVmfsDatastoreExtendOptions. This issue is observed on the following datastores:
    • VMFS3 datastore created with a start sector greater than 1MB and less than the value given by the formula (head count * sector count * sector size).
    • VMFS3 datastore created on a logical unit number with a partition whose size is smaller than the value given by the formula (head count * sector count * sector size).

    This issue is resolved in this release.
  • ESXi host might fail due to a synchronization issue between pending copy-on-write (COW) Async I/Os and closure of sparse redo log file
    An ESXi host might fail with a purple diagnostic screen and report a #PF Exception error with a backtrace similar to the following:

    @BlueScreen: #PF Exception 14 in world 137633099:vmm0:TSTSPSR IP 0x41801e20b01f addr 0x0
    cpu4:137633099)Code start: 0x41801dc00000 VMK uptime: 464:10:04:03.850
    cpu4:137633099)0x417f86a5f6b8:[0x41801e20b01f]COWCopyWriteDone@esx:nover+0x122 stack: 0x41027f520dc0
    cpu4:137633099)0x417f86a5f6e8:[0x41801dc0838a]AsyncPopCallbackFrameInt@vmkernel:nover+0x81 stack: 0x0
    cpu4:137633099)0x417f86a5f968:[0x41801e20b2c2]COWAsyncDataDone@esx:nover+0x125 stack: 0x41027f5aebd8
    cpu4:137633099)0x417f86a5f998:[0x41801dc0838a]AsyncPopCallbackFrameInt@vmkernel:nover+0x81 stack: 0x41027f556ec0
    cpu4:137633099)0x417f86a5f9c8:[0x41801dc085e9]AsyncCompleteOneIO@vmkernel:nover+0xac stack: 0x41027fd9c540
    cpu4:137633099)0x417f86a5f9f8:[0x41801dc0838a]AsyncPopCallbackFrameInt@vmkernel:nover+0x81 stack: 0x41027f5ae440
    cpu4:137633099)0x417f86a5fa18:[0x41801de240b9]FS_IOAccessDone@vmkernel:nover+0x80 stack: 0x41027f556858
    cpu4:137633099)0x417f86a5fa48:[0x41801dc0838a]AsyncPopCallbackFrameInt@vmkernel:nover+0x81 stack: 0x41027f59a2c0
    cpu4:137633099)0x417f86a5fa78:[0x41801dc085e9]AsyncCompleteOneIO@vmkernel:nover+0xac stack: 0x41027f480040
    cpu4:137633099)0x417f86a5faa8:[0x41801dc0838a]AsyncPopCallbackFrameInt@vmkernel:nover+0x81 stack: 0x417f86a5fad8
    cpu4:137633099)0x417f86a5fad8:[0x41801de3fd1d]FDSAsyncTokenIODone@vmkernel:nover+0xdc stack: 0x0
    cpu4:137633099)0x417f86a5fbd8:[0x41801de4faa9]SCSICompleteDeviceCommand@vmkernel:nover+0xdf0 stack: 0x41027f480040

    This issue is resolved in this release.
  • Accessing corrupted metadata on VMFS3 volume might result in ESXi host failure
    If a file’s metadata is corrupted on a VMFS3 volume, ESXi host might fail with a purple diagnostic screen while trying to access the file. VMFS file corruption is extremely rare but might be caused by external storage issues.

    This issue is resolved in this release.
  • Adding new ESXi host to a High Availability cluster and subsequently reconfiguring the cluster might result in the failure of any other host in the cluster with purple diagnostic screen
    When a new ESXi host is added to a High Availability (HA) cluster and the HA cluster is subsequently reconfigured, any other host in the existing HA cluster might fail with a purple diagnostic screen and an error message similar to the following:

    0x4122264c7ce8:[0x41802a07e023]NFSLock_GetLease@<None>#<None>+0x2e stack: 0x410023ce8570.

    The reboot of the failed ESXi host might result in a similar failure of the host.
    This issue occurs when all the following scenarios are true in the HA environment:
    • The value of the Network File System (NFS) option LockRenewMaxFailureNumber, which determines the number of lock update failures that must occur before the lock is marked as stale, is changed from 1 to 0.
    • The value of the NFS option LockUpdateTimeout, which determines the amount of time before a lock update request is aborted, is changed from 1 to 0.
    • Any of the hosts tries to acquire the lock on the NFS volume.

    This issue is resolved in this release.

  • ESXi host might fail with a purple diagnostic screen due to race conditions in VMkernel resulting in null pointer exceptions
    Race conditions in the VMkernel might result in null pointer exceptions, causing the ESXi host to fail with a purple diagnostic screen.
    These race conditions might occur when you perform certain file system operations on the Device File System (DevFS) on the ESXi host.

    This issue is resolved in this release.
  • Upgrade from ESX/ESXi 4.x or earlier to ESXi 5.0 or later might fail if the default MaxHeapSizeMB configuration option for VMFS has been changed
    An ESXi host might fail to respond after an upgrade from ESX/ESXi 4.x or earlier to ESXi 5.0 or later with the following error:

    hostCtlException: Unable to load module /usr/lib/vmware/vmkmod/vmfs3: Failure

    This issue might occur if you set the MaxHeapSizeMB configuration option for VMFS manually on an ESX/ESXi host prior to the upgrade.

    This issue is resolved with this release.

  • When the quiesced snapshot operation fails, the redo logs are not consolidated
    When you attempt to take a quiesced snapshot of a virtual machine, if the snapshot operation fails towards the end of its completion, the redo logs created as part of the snapshot are not consolidated. The redo logs might consume a lot of datastore space.

    This issue is resolved in this release. If the quiesced snapshot operation fails, the redo log files are consolidated.

  • Virtual machines residing on NFS datastores are inaccessible on an ESXi 5.1 host after vSphere Storage Appliance (VSA) 5.1 exits maintenance mode
    The hostd agent incorrectly reports the status of virtual machines to vCenter Server and the ESXi host after the NFS storage is disconnected for longer than the Misc.APDTimeout value. This issue is observed with powered-off virtual machines.

    This issue is resolved in this release.

  • Changed block tracking might return the full thin-provisioned disk
    On a VMFS3 file system holding a large thin-provisioned disk greater than 800GB, the changed block tracking ID is incorrect due to a restriction on hardcoded IOCTLs. As a result, the QueryChangedDiskAreas call returns incorrect disk areas, which increases backup time.

    This issue is resolved in this release by removing the limitation on the number of IOCTL calls for the physical disk mapping so that the changed disk areas are computed correctly. The improvement is especially noticeable on Linux ext2 and ext3 file systems.

  • ESXi hosts might fail with a purple diagnostic screen in ATA driver
    A race condition in the ATA driver can cause it to fail, and ESXi hosts might fail with a purple diagnostic screen and display error messages similar to the following:

    BUG: failure at vmkdrivers/src_9/drivers/ata/libata-core.c:5833/ata_hsm_move()! (inside vmklinux)
    Panic_vPanic@vmkernel#nover+0x13 stack: 0x3000000010, 0x412209a87e00
    vmk_PanicWithModuleID@vmkernel#nover+0x9d stack: 0x412209a87e20,0x4
    ata_hsm_move@com.vmware.libata#9.2.0.0+0xa0 stack: 0x0, 0x410017c03e
    ata_pio_task@com.vmware.libata#9.2.0.0+0xa5 stack: 0x0, 0x0, 0x53fec
    vmklnx_workqueue_callout@com.vmware.driverAPI#9.2+0x11a stack: 0x0,
    helpFunc@vmkernel#nover+0x568 stack: 0x0, 0x0, 0x0, 0x0, 0x0


    This issue is resolved in this release.

  • Adding a new hard disk to a virtual machine that resides on a Storage DRS enabled datastore cluster might result in Insufficient Disk Space error
    If the size of a virtual disk that you add to a virtual machine residing on a Storage DRS enabled datastore is greater than the free space available on the datastore, SDRS might migrate another virtual machine out of the datastore to allow sufficient free space for adding the virtual disk. The Storage vMotion operation completes, but the subsequent addition of the virtual disk to the virtual machine might fail and an error message similar to the following might be displayed:

    Insufficient Disk Space

    This issue is resolved in this release.

  • iSCSI LUNs do not come back online after recovering from the APD state
    After recovering from the All-Paths-Down (APD) state, iSCSI LUNs do not come up until a host reboot. This issue occurs on Broadcom iSCSI offload-enabled adapters configured for iSCSI.
     
    This issue is resolved in this release.

Supported Hardware Issues

  • ESXi host might fail with a purple diagnostic screen if you run the vmware-vimdump command from DCUI
    When you run the vmware-vimdump command from Direct Console User Interface (DCUI), the ESXi host might fail with a purple diagnostic screen. This might also result in missed heartbeat messages. This issue does not occur when the command is run by connecting through an SSH console.

    This issue is resolved in this release.

Upgrade and Installation Issues

  • RDMs attached to Solaris virtual machines might be overwritten when ESXi hosts are upgraded using Update Manager
    When you upgrade to ESXi 5.x by using Update Manager, an error in interpreting the disk partition type might cause RDMs attached to Solaris virtual machines to be overwritten. This might result in loss of data on the RDMs.

    This issue is resolved in this release.
  • Reinstallation of ESXi 5.1 does not remove the Datastore label of the local VMFS of an earlier installation
    Reinstallation of ESXi 5.1 with an existing local VMFS volume retains the Datastore label even after the user chooses the overwrite datastore option to overwrite the VMFS volume.

    This issue is resolved in this release.
  • ESXi 5.x scripted installation incorrectly warns that USB or SD media does not support VMFS, despite --novmfsondisk parameter in kickstart file
    If you perform a scripted installation to install ESXi 5.1 on a disk that is identified as USB or SD media, the installer might display the following warning message:

    The disk (<disk-id>) specified in install does not support VMFS.

    This message is displayed even if you have included the --novmfsondisk parameter for the install command in the kickstart file.

    This issue is resolved in this release.
  • When using vSphere Auto Deploy, the iPXE process might time out
    When you use vSphere Auto Deploy to boot ESXi 5.1 hosts, the iPXE process might time out while attempting to get an IP address from DHCP servers, and the Auto Deploy boot process stops abruptly.

    This issue is resolved in this release by increasing the total timeout period to 60 seconds.
  • Attempts to upgrade an ESXi host in an HA cluster might fail with vCenter Update Manager
    Upgrading an ESXi host in a High Availability (HA) cluster by using vCenter Update Manager (VUM) might fail with an error message similar to the following:
    the host returned esx update error code 7
    This issue occurs when multiple staging operations are performed with different baselines in Update Manager.

    This issue is resolved in this release.
  • Attempts to upgrade from ESX 4.1 with extents on the local datastore to ESXi 5.1 fail
    Upgrading from ESX 4.1 with extents on the local datastore to ESXi 5.1 is not supported. However, the upgrade process previously allowed this upgrade and dropped the extents on the local datastore without displaying any error or warning message.

    This issue is resolved in this release by adding a check to the pre-check script to detect this situation; a message is then displayed prompting the user to terminate the upgrade or migration.
  • Microsoft Windows Deployment Services (WDS) might fail to PXE boot virtual machines that use the VMXNET3 network adapter
    Attempts to PXE boot virtual machines that use the VMXNET3 network adapter by using the Microsoft Windows Deployment Services (WDS) might fail with messages similar to the following:

    Windows failed to start. A recent hardware or software change might be the cause. To fix the problem:
    1. Insert your Windows installation disc and restart your computer.
    2. Choose your language setting, and then click "Next."
    3. Click "Repair your computer."
    If you do not have the disc, contact your system administrator or computer manufacturer for assistance.

    Status: 0xc0000001

    Info: The boot selection failed because a required device is inaccessible.

    This issue is resolved in this release.
  • Scripted ESXi installation or upgrade from an attached USB drive might fail if the file system type on any of the USB drives is not fat16 or fat32
    If multiple USB flash drives are attached to an ESXi host, scripted ESXi installation or upgrade by using the ks=usb boot option might fail with an exception error if the file system type on any of the USB drives with MS-DOS partitioning is not fat16 or fat32.

    This issue is resolved in this release.
  • ESXi hosts might retain older version of the /etc/vmware/service/service.xml file after upgrade
    When you modify /etc/vmware/service/service.xml, which has a sticky bit set, and then perform an ESX/ESXi upgrade from 4.0 to 5.1, the old service.xml file causes compatibility issues. This happens because the old service.xml file is retained by the ESXi host even after the upgrade.

    This issue is resolved in this release.
  • ESXi 5.1 scripted installation with IPv6 details fails
    When you perform a scripted installation of ESXi 5.1 with IPv6 details specified in ks.cfg, the installation fails while validating the IPv6 details.

    This issue is resolved in this release.
  • Syslog.global.logDir and Syslog.global.logHost values not persistent after an ESXi host upgrade
    When you upgrade an ESXi host from version 4.x to 5.x, the values of Syslog.global.logDir and Syslog.global.logHost might not persist. (A sketch for reapplying these settings appears at the end of this section.)

    This issue is resolved in this release.
  • Third-party VIB warning messages should not be displayed while an ESXi 5.0 host is upgraded to 5.1
    When you perform an upgrade from ESXi 5.0 to ESXi 5.1 on a host with third-party VIBs, such as PowerPath, the ESXi installer and vSphere Update Manager should not display third-party VIB warning messages.

    This issue is resolved in this release.
  • resxtop fails when upgraded from vSphere 5.0 to vSphere 5.1
    In vSphere 5.1, SSL certificate checks are set to ON. This might cause resxtop to fail to connect to hosts and display an exception message similar to the following:
    HTTPS_CA_FILE or HTTPS_CA_DIR not set.

    This issue is resolved in this release.

  • Upgrading from ESX Server 4.0 Update 4 to ESXi Server 5.x might result in an error if the password for the vpxuser account is not set with MD5 encryption
    When you upgrade from ESX Server 4.0 Update 4 to ESXi Server 5.x by using VMware Update Manager (VUM), an error message similar to the following might be displayed:

    Password for user vpxuser was not set with MD5 encryption. Only the first 8 characters will matter. Upon reboot, please reset the password.

    This issue occurs if the password for the vpxuser account is not set with MD5 encryption.

    This issue is resolved in this release.
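
The following is a sketch for checking and reapplying the syslog settings mentioned in the Syslog.global.logDir and Syslog.global.logHost item above; the directory and host values are placeholders for your environment:

    # Show the current syslog configuration, including the log directory and log host
    esxcli system syslog config get

    # Reapply the desired values (placeholder datastore path and syslog host)
    esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs --loghost=udp://syslog.example.com:514

    # Reload the syslog service so the new settings take effect
    esxcli system syslog reload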

vCenter Server and vSphere Client Issues

  • VMRC and vSphere Client might stop responding when connected to a failed virtual machine
    On an ESXi 5.1 host, VMware Remote Console (VMRC) and vSphere Client might stop responding when connected to a failed virtual machine or virtual machine with failed VMware Tools.

    This issue is resolved in this release.

Virtual Machine Management Issues

  • Unable to deploy virtual machines using template files
    You might be unable to deploy virtual machines by using the template files. This issue occurs when the template files and .vmtx files are not updated after the datastore resignature process is performed.

    This issue is resolved in this release by updating the .vmtx virtual machine files of the template with the latest resignatured datastore path.

  • Virtual machine power on operation might fail with VMFS heap out of memory warning messages
    If a virtual machine has more than 18 virtual disks, each greater than 256GB, the ESXi host might be unable to power on the virtual machine. A warning message similar to the following might be logged in vmkernel.log:
    WARNING: Heap: 2525: Heap vmfs3 already at its maximum size. Cannot expand. WARNING: Heap: 2900: Heap_Align(vmfs3, 2099200/2099200 bytes, 8 align) failed. caller: 0x4180368c0b90

    This issue is resolved in this release.
  • Cannot create a quiesced snapshot after an independent disk is deleted from a virtual machine
    If an independent disk is deleted from a virtual machine, attempts to create a quiesced snapshot of the virtual machine might fail because the disk mode data for a given SCSI node might be outdated.
    An error message similar to the following might be displayed:
    Status: An error occurred while quiescing the virtual machine. See the virtual machine's event log for details.

    The log files might contain entries similar to the following:
    HotAdd: Adding scsi-hardDisk with mode 'independent-persistent' to scsi0:1

    ToolsBackup: changing quiesce state: STARTED -> DONE SnapshotVMXTakeSnapshotComplete done with snapshot 'back': 0
    SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine. (40).

    This issue is resolved in this release.
  • Time synchronization with the ESXi server might result in an unexpected reboot of the guest operating system when an ESXi host is configured as an NTP server
    When an ESXi host is configured as a Network Time Protocol (NTP) server, the guest operating system might unexpectedly reboot during time synchronization with the ESXi host. This issue occurs when the virtual machine monitoring sensitivity level is set to High on a High Availability cluster and the das.iostatsInterval option is set to False.

    This issue can be resolved by setting the das.iostatsInterval option to True.
  • ESXi host might fail while performing certain virtual machine operations
    When you perform certain virtual machine operations, an issue related to metadata corruption of LUNs might sometimes cause an ESXi host to fail with a purple screen and display error messages similar to the following:
    @BlueScreen: #DE Exception 0 in world 4277:helper23-15 @ 0x41801edccb6e
    3:21:13:31.624 cpu7:4277)Code start: 0x41801e600000 VMK uptime: 3:21:13:31.624
    3:21:13:31.625 cpu7:4277)0x417f805afed0:[0x41801edccb6e]Fil3_DirIoctl@esx:nover+0x389 stack: 0x410007741f60
    3:21:13:31.625 cpu7:4277)0x417f805aff10:[0x41801e820f99]FSS_Ioctl@vmkernel:nover+0x5c stack: 0x2001cf530
    3:21:13:31.625 cpu7:4277)0x417f805aff90:[0x41801e6dcf03]HostFileIoctlFn@vmkernel:nover+0xe2 stack: 0x417f805afff0
    3:21:13:31.625 cpu7:4277)0x417f805afff0:[0x41801e629a5a]helpFunc@vmkernel:nover+0x501 stack: 0x0
    3:21:13:31.626 cpu7:4277)0x417f805afff8:[0x0] stack: 0x0

    This issue is resolved in this release. Metadata corruption of LUNs now results in an error message.

vMotion and Storage vMotion Issues

  • When you vMotion Windows Server 2008 virtual machines from ESX/ESXi 4.0 to ESXi 5.1 and then perform a Storage vMotion, quiesced snapshots fail
    A Storage vMotion operation on ESXi 5.1 by default sets disk.enableUUID to true for a Windows Server 2008 virtual machine, thus enabling application quiescing. Subsequent quiesced snapshot operations fail until the virtual machine undergoes a power cycle.

    This issue is resolved in this release.
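
    If you need to confirm whether application quiescing was enabled for an affected virtual machine, the following hedged PowerCLI sketch reads the setting; the virtual machine name is a placeholder and an existing Connect-VIServer session is assumed:
    Get-AdvancedSetting -Entity (Get-VM "Win2008-VM") -Name disk.EnableUUID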

VMware Tools Issues

  • VMware Tools might fail while taking a quiesced snapshot of a virtual machine
    If non-executable files are present in the backupScripts.d folder, VMware Tools might fail while taking a quiesced snapshot of a virtual machine.

    This issue is resolved in this release.
  • After VMware Tools installation the guest operating system name changes from Microsoft Windows Server 2012 (64-bit) to Microsoft Windows 8 (64-bit)
    After you create Microsoft Windows Server 2012 (64-bit) virtual machines and install VMware Tools, the guest operating system name changes from Microsoft Windows Server 2012 (64-bit) to Microsoft Windows 8 (64-bit).

    This issue is resolved in this release.
  • VMware Tools OSP packages have the same distribution identifiers in their filenames
    VMware Tools OSP packages do not have distinguishable filenames for SUSE Linux Enterprise Server 11, Red Hat Enterprise Linux 5, and Red Hat Enterprise Linux 6 guest operating systems. As a result, it is difficult to deploy VMware Tools OSP packages using the Red Hat Satellite server.

    This issue is resolved in this release.
  • VMware Tools might leak memory in a Linux guest operating system
    When multiple VLANs are configured for a network interface in a Linux guest operating system, VMware Tools might leak memory.

    This issue is resolved in this release.
  • File permissions of /etc/fstab might change after VMware Tools is installed
    When VMware Tools is installed on a virtual machine running a guest operating system such as SUSE Linux Enterprise Server 11 SP1, the file permission attribute of /etc/fstab might change from 644 to 600.

    This issue is resolved in this release.
  • VMware Tools on a Linux virtual machine might fail intermittently
    VMware Tools includes a shared library file named libdnet. When certain other software, such as Dell OpenManage, is installed, another shared library with the same name is created on the file system. When VMware Tools loads, it loads the Dell OpenManage libdnet.so.1 library instead of the VMware Tools libdnet.so. As a result, the guest OS information and the NIC information might not be displayed in the Summary tab of the vSphere Client.

    This issue is resolved in this release.
  • On an ESX/ESXi host earlier than version 5.1, upgrading only VMware Tools to version 5.1 results in a warning message
    On an ESX/ESXi host earlier than version 5.1 and with a virtual machine running Windows guest operating system, if you upgrade only VMware Tools to version 5.1, a warning message similar to the following might be displayed in Windows Event Viewer:
    [ warning] [vmusr:vmusr] vmware::tools::UnityPBRPCServer::Start: Failed to register with the host!

    This issue is resolved in this release.
  • Installation of View Agent results in an error message on reboot
    VMware Tools displays the following error message when you attempt to reboot after installing View Agent:

    VMWare Tools unrecoverable error: (vthread-3)

    This issue is resolved in this release.
  • Attempts to install VMware Tools might fail with Linux kernel version 3.7
    VMware Tools drivers are not compiled as the VMware Tools installation scripts are unable to identify the new kernel header path with Linux kernel version 3.7. This might cause VMware Tools installation to fail.

    This issue is resolved in this release.
  • Customization of guest operating system might fail when deployed from some non-English versions of Windows guest operating system templates
    Customization of the guest operating system might fail when deployed from some non-English versions of Windows guest operating system templates, such as the French version of Microsoft Windows 7, the Russian version of Microsoft Windows 7, and the French version of Microsoft Windows Server 2008 R2. This issue occurs when the VMware Tools service vmtoolsd.exe fails.

    This issue is resolved in this release.

  • Unable to display names and descriptions of the VM Processor or the VM Memory performance counters on Windows Vista or later guest operating systems
    When you configure remote performance logging as an administrative user on a Windows Vista or later guest operating system, the names and descriptions of the VM Processor and VM Memory counters might not be displayed in the Windows Performance Monitor (perfmon) console.
    This issue occurs when the Windows guest operating system is installed with a locale other than en_us or de. This issue occurs with VMware Tools version 8.3.1.2.

    This issue is resolved in this release.

  • Pre-built Modules (PBMs) are not available for Ubuntu 12.10 32-bit and 64-bit operating systems on an ESXi 5.1 host
    Pre-built Modules (PBMs) for Ubuntu 12.10 32-bit and 64-bit guest operating systems might not be available on an ESXi 5.1 host.

    This issue is resolved in this release.

  • Virtual machines with vShield Endpoint Thin Agent might encounter performance-related problems when you copy network files to or from a CIFS share
    You might encounter performance-related problems with virtual machines while copying network files to or from a Common Internet File System (CIFS) share.
    This issue occurs on virtual machines that run the vShield Endpoint Thin Agent available in the VMware Tools bundle.

    This issue is resolved in this release.

  • Windows virtual machine running on an ESXi 5.0 host with vShield Endpoint and VMware Tools might display sharing violation errors
    In an environment where the vShield Endpoint component is bundled with VMware Tools, a Windows virtual machine running on an ESXi 5.0 host might display sharing violation errors. An error message similar to the following might appear when you attempt to open a network file:

    Error opening the document. This file is already open or is used by another application.

    This issue is resolved in this release.
  • VMware Tools no longer installs earlier version of Visual C++ runtime
    For use with VDDK, VMware Tools sometimes installed an older version of Microsoft's Visual C++ runtime library on the backup proxy, even if a newer version was already installed. For example, VMware Tools installed runtime version 9.0.30729.4148 even when version 9.0.30729.6161 existed on the proxy virtual machine.

    This issue is resolved in this release. If a newer version of the Microsoft Visual C++ runtime library is available on a virtual machine, VMware Tools does not install an old version.

Known Issues

Known issues not previously documented are marked with the * symbol. The known issues are grouped as follows.

Installation and Upgrade Issues

  • Inventory objects might not be visible after upgrading a vCenter Server Appliance configured with Postgres database*
    When a vCenter Server Appliance configured with a Postgres database is upgraded from 5.0 Update 2 to 5.1 Update 1, inventory objects such as datacenters, vDS, and so on that existed before the upgrade might not be visible. This issue occurs when you use the vSphere Web Client to connect to the vCenter Server Appliance.

    Workaround: Restart the Inventory service after upgrading vCenter Server Appliance.
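
    Assuming the default service name on the vCenter Server Appliance 5.1, one way to restart the Inventory Service from the appliance console is:
    service vmware-inventoryservice restart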

  • For Auto Deploy Stateful installation, cannot use firstdisk argument of ESX on systems that have ESX/ESXi already installed on USB
    You configure the host profile for a host that you want to set up for Stateful Install with Auto Deploy. As part of configuration, you select USB as the disk, and you specify esx as the first argument. The host currently has ESX/ESXi installed on USB. Instead of installing ESXi on USB, Auto Deploy installs ESXi on the local disk.

    Workaround: None.

  • Auto Deploy PowerCLI cmdlets Copy-DeployRule and Set-DeployRule require object as input
    When you run the Copy-DeployRule or Set-DeployRule cmdlet and pass in an image profile or host profile name, an error results.

    Workaround: Pass in the image profile or host profile object.
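
    For example, a hedged sketch that retrieves the image profile object and passes it to Set-DeployRule; the rule and profile names are placeholders:
    Set-DeployRule -DeployRule "MyRule" -Item (Get-EsxImageProfile -Name "ESXi-5.1.0-standard")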

  • Applying host profile that is set up to use Auto Deploy with stateless caching fails if ESX is installed on the selected disk
    You use host profiles to set up Auto Deploy with stateless caching enabled. In the host profile, you select a disk on which a version of ESX (not ESXi) is installed. When you apply the host profile, an error that includes the following text appears.
    Expecting 2 bootbanks, found 0

    Workaround: Remove the ESX software from the disk, or select a different disk to use for stateless caching.

  • vSphere Auto Deploy no longer works after a change to the IP address of the machine that hosts the Auto Deploy server
    You install Auto Deploy on a different machine than the vCenter Server, and change the IP address of the machine that hosts the Auto Deploy server. Auto Deploy commands no longer work after the change.

    Workaround: Restart the Auto Deploy server service.
    net start vmware-autodeploy-waiter
    If restarting the service does not resolve the issue, you might have to reregister the Auto Deploy server. Run the following command, specifying all options.
    autodeploy-register.exe -R -a vCenter-IP -p vCenter-Port -u user_name -w password -s setup-file-path

  • On HP DL980 G7, ESXi hosts do not boot through Auto Deploy when onboard NICs are used
    You cannot boot an HP DL980 G7 system using Auto Deploy if the system is using the onboard (LOM Netxen) NICs for PXE booting.

    Workaround: Install an add-on NIC approved by HP on the host, for example, the HP NC360T, and use that NIC for PXE booting.

  • A live update with esxcli fails with a VibDownloadError
    A user performs two updates in sequence, as follows.

    1. A live install update using the esxcli software profile update or esxcli software vib update command.
    2. An update that requires a reboot.

    The second transaction fails. One common failure is signature verification, which can be checked only after the VIB is downloaded.

    Workaround: Resolving the issue is a two-step process.

    1. Reboot the ESXi host to clean up its state.
    2. Repeat the live install.
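
    As an illustration only, with placeholder depot and profile names, repeating the live install after the reboot might look like the following:
    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-510-depot.zip -p ESXi-5.1.0-standard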

  • ESXi scripted installation fails to find the kickstart (ks) file on a CD-ROM drive when the machine does not have any NICs connected
    When the kickstart file is on a CD-ROM drive in a system that does not have any NICs connected, the installer displays the error message: Can't find the kickstart file on cd-rom with path <path_to_ks_file>.

    Workaround: Reconnect the NICs to establish network connection, and retry the installation.

  • Scripted installation fails on the SWFCoE LUN
    When the ESXi installer invokes installation using the kickstart (ks) file, all the FCoE LUNs have not yet been scanned and populated by the time installation starts. This causes the scripted installation on any of the LUNs to fail. The failure occurs when the https, http, or ftp protocol is used to access the kickstart file.

    Workaround: In the %pre section of the kickstart file, include a sleep of two minutes:
    %pre --interpreter=busybox
    sleep 120

  • Potential problems if you upgrade vCenter Server but do not upgrade Auto Deploy server
    When you upgrade vCenter Server, vCenter Server replaces the 5.0 vSphere HA agent (vmware-fdm) with a new agent on each ESXi host. The replacement happens each time an ESXi host reboots. If vCenter Server is not available, the ESXi hosts cannot join a cluster.

    Workaround: If possible, upgrade the Auto Deploy server.
    If you cannot upgrade the Auto Deploy server, you can use Image Builder PowerCLI cmdlets included with vSphere PowerCLI to create an ESXi 5.0 image profile that includes the new vmware-fdm VIB. You can supply your hosts with that image profile.

    1. Add the ESXi 5.0 software depot and the software depot that contains the new vmware-fdm VIB.
      Add-EsxSoftwareDepot C:\Path\VMware-Esxi-5.0.0-buildnumber-depot.zip
      Add-EsxSoftwareDepot http://vcenter_server/vSphere-HA-depot
    2. Clone the existing image profile and add the vmware-fdm VIB.
      New-EsxImageProfile -CloneProfile "ESXi-5.0.0-buildnumber-standard" -Name "ImageName"
      Add-EsxSoftwarePackage -ImageProfile "ImageName" -SoftwarePackage vmware-fdm
    3. Create a new rule that assigns the new image profile to your hosts and add the rule to the rule set.
      New-DeployRule -Name "Rule Name" -Item "ImageName" -Pattern "my host pattern"
      Add-DeployRule -DeployRule "Rule Name"
    4. Perform a test and repair compliance operation for the hosts.
      Test-DeployRuleSetCompliance Host_list | Repair-DeployRuleSetCompliance

  • If Stateless Caching is turned on, and the Auto Deploy server becomes unavailable, the host might not automatically boot using the stored image
    In some cases, a host that is set up for stateless caching with Auto Deploy does not automatically boot from the disk that has the stored image if the Auto Deploy server becomes unavailable. This can happen even if the boot device that you want is next in logical boot order. What precisely happens depends on the server vendor BIOS settings.

    Workaround: Manually select the disk that has the cached image as the boot device.

  • During upgrade of ESXi 5.0 hosts to ESXi 5.1 with ESXCLI, vMotion and Fault Tolerance (FT) logging settings are lost
    On an ESXi 5.0 host, you enable vMotion and FT for a port group. You upgrade the host by running the command esxcli software profile update. As part of a successful upgrade, the vMotion settings and the logging settings for Fault Tolerance are returned to the default settings, that is, disabled.

    Workaround: Use vSphere Upgrade Manager to upgrade the hosts, or return vMotion and Fault Tolerance to their pre-upgrade settings manually.

Networking Issues
  • On an SR-IOV enabled ESXi host, virtual machines associated with virtual functions might fail to start
    When SR-IOV is enabled on ESXi 5.1 hosts with Intel ixgbe NICs and several virtual functions are enabled in this environment, some virtual machines might fail to start.
    Messages similar to the following are displayed in the vmware.log file:
    2013-02-28T07:06:31.863Z| vcpu-1| I120: Msg_Post: Error
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ PCIPassthruChangeIntrSettings: 0a:17.3 failed to register interrupt (error code 195887110)
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5122262e-ab950f8e-cd4f-b8ac6f917d68/VMLibRoot/VMLib-RHEL6.2-64-HW7-default-3-2-1361954882/vmwar
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.withoutLog] You can request support.
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.vmSupport.vmx86]
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ To collect data to submit to VMware technical support, run "vm-support".
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.response] We will respond on the basis of your support entitlement.

    Workaround: Reduce the number of virtual functions associated with the affected virtual machine and start it.

  • System stops responding during TFTP/HTTP transfer when provisioning ESXi 5.1 or 5.0 U1 with Auto Deploy
    When provisioning ESXi 5.1 or 5.0 U1 with Auto Deploy on Emulex 10GbE NC553i FlexFabric 2 Ports using the latest open-source gPXE, the system stops responding during TFTP/HTTP transfer.

    Emulex 10GbE PCI-E controllers are memory-mapped controllers. The PXE/UNDI stack running on this controller must switch to big real mode from real mode during the PXE TFTP/HTTP transfer to program the device-specific registers located above 1MB in order to send and receive packets through the network. During this process, CPU interrupts are inadvertently enabled, which causes the system to stop responding when other device interrupts are generated during the CPU mode switching.

    Workaround: Upgrade the NIC firmware to build 4.1.450.7 or later.

  • Changes to the number of ports on a standard virtual switch do not take effect until host is rebooted
    When you change the number of ports on a standard virtual switch, the changes do not take effect until you reboot the host. This differs from the behavior with a distributed virtual switch, where changes to the number of ports take effect immediately.

    When changing the number of ports on a standard virtual switch, ensure that the total number of ports on the host, from both standard and distributed switches, does not exceed 4096.

    Workaround: None.

  • Administrative state of a physical NIC not reported properly as down
    Administratively setting a physical NIC state to down does not conform to IEEE standards. When a physical NIC is set to down through the virtual switch command, it causes two known problems:

    • ESXi experiences an increase in traffic it cannot handle, which wastes network resources at the physical switch fronting ESXi and in ESXi itself.

    • The NIC behaves in an unexpected way. Operators expect to see the NIC powered down, but the NIC displays as still active.

    VMware recommends using the esxcli network nic down -n vmnicN command, with the following caveats:
    • This command turns off the driver only. It does not power off the NIC. When the ESXi physical network adapter is viewed from the management interface of the physical switch fronting the ESXi system, the standard switch uplink still appears to be active.

    • The administrative state of a NIC is not visible in esxcli output or the UI. When debugging, remember to check the state by examining /etc/vmware/esx.conf.

    • The SNMP agent reports the administrative state; however, it reports the state incorrectly if the NIC was set to down while the operational state was already down. It reports the administrative state correctly if the NIC was set to down while the operational state was active.

    Workaround: Change the administrative state on the physical switch fronting the ESXi system to down instead of using the virtual switch command.
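
    To compare how the link state reports at the driver level with what is recorded in the host configuration, a hedged check from the ESXi Shell (the vmnic name is a placeholder) is:
    esxcli network nic list
    grep vmnic2 /etc/vmware/esx.conf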

  • Linux driver support changes
    Device drivers for VMXNET2 or VMXNET (flexible) virtual NICs are not available for virtual machines running Linux kernel version 3.3 and later.

    Workaround: Use a VMXNET3 or e1000 virtual NIC for virtual machines running Linux kernel version 3.3 and later.

  • vSphere 5.0 network I/O control bandwidth allocation is not distributed fairly across multiple uplinks
    In vSphere 5.0, if a networking bandwidth limit is set on a resource pool while using network I/O control, this limit is enforced across a team of uplinks at the host level. This bandwidth cap is implemented by a token distribution algorithm that is not designed to fairly distribute bandwidth between multiple uplinks.

    Workaround: vSphere 5.1 network I/O control limits have been narrowed to a per uplink basis.

  • Mirrored Packet Length setting could cause a remote mirroring source session not to function
    When you configure a remote mirroring source session with the Mirrored Packet Length option set, the destination does not receive some mirrored packets. However, if you disable the option, packets are again received.
    If the Mirrored Packet Length option is set, packets longer than the specified length are truncated, and some packets are dropped. Lower-layer code does not fragment the truncated packets or recalculate their checksums. Two conditions might cause packets to drop:

    • The Mirrored Packet Length is greater than the maximum transmission unit (MTU)
      If TSO is enabled in your environment, the original packets could be very large. After being truncated by the Mirrored Packet Length, they are still larger than the MTU, so they are dropped by the physical NIC.

    • Intermediate switches perform L3 check
      Some truncated packets can have the wrong packet length and checksum. Some advanced physical switches check L3 information and drop invalid packets. The destination does not receive the packets.

    Workaround:

    • If TCP Segmentation Offload (TSO) is enabled, disable the Mirrored Packet Length option.

    • You can enable or disable L3 check on some switches, such as Cisco's 4500 series switch. If these switches are in use, disable the L3 check. For switches that cannot be configured, disable the Mirrored Packet Length option.

  • Enabling more than 16 VMkernel network adapters causes vMotion to fail
    vSphere 5.x has a limit of 16 VMkernel network adapters enabled for vMotion per host. If you enable more than 16 VMkernel network adapters for vMotion on a given host, vMotion migrations to or from that host might fail. An error message in the VMkernel logs on ESXi says Refusing request to initialize 17 stream ip entries, where the number indicates how many VMkernel network adapters you have enabled for vMotion.

    Workaround: Disable vMotion VMkernel network adapters until only a total of 16 are enabled for vMotion.
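
    A hedged way to review which VMkernel network adapters exist on the host before disabling vMotion on the extra ones (vMotion is enabled or disabled per adapter in the vSphere Client) is:
    esxcli network ip interface list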

  • vSphere network core dump does not work when using the nx_nic driver in a VLAN environment
    When network core dump is configured on a host that is part of a VLAN, network core dump fails when the NIC uses the QLogic Intelligent Ethernet Adapter driver (nx_nic). Network core dump packets are not tagged with the correct VLAN tag if the uplink adapter uses nx_nic.

    Workaround: Use another uplink adapter with a different driver when configuring network coredump in a VLAN.
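
    A hedged sketch of pointing network core dump at a VMkernel adapter whose uplink uses a different driver; the interface name, server address, and port are placeholders:
    esxcli system coredump network set --interface-name vmk1 --server-ipv4 192.168.10.5 --server-port 6500
    esxcli system coredump network set --enable true
    esxcli system coredump network check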

  • If the kickstart file for a scripted installation calls a NIC already in use, the installation fails
    If you use a kickstart file to set up a management network post installation, and you call a NIC that is already in use from the kickstart file, you see the following error message: Sysinfo error on operation returned status: Busy. Please see the VMkernel log for detailed error information.

    The error is encountered when you initiate a scripted installation on a system with two NICs: a NIC configured for SWFCoE/SWiSCSI, and a NIC configured for networking. If you use the network NIC to initiate the scripted installation by providing either netdevice=<nic> or BOOTIF=<MAC of the NIC> at the boot options, the kickstart file uses the other NIC, the one configured for SWFCoE/SWiSCSI, in the network line to configure the management network.

    Installation (partitioning the disks) is successful, but when the installer tries to configure the management-network for the host with the network parameters provided in the kickstart file, it fails because the NIC was in use by SWFCoE/SWiSCSI.

    Workaround: Use an available NIC in the kickstart file for setting up a management network after installation.
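
    For example, a hedged kickstart fragment that configures the management network on a NIC that is not claimed by SWFCoE/SWiSCSI; the device name and addresses are placeholders:
    network --bootproto=static --device=vmnic1 --ip=192.168.1.10 --netmask=255.255.255.0 --gateway=192.168.1.1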

  • Virtual machines running ESX that also use VMXNET3 as the pNIC might crash
    Virtual machines running ESX as a guest that also use VMXNET3 as the pNIC might crash because support for VMXNET3 is experimental. The default NIC for an ESX virtual machine is e1000, so this issue is encountered only when you override the default and choose VMXNET3 instead.

    Workaround: Use e1000 or e1000e as the pNIC for the ESX virtual machine.

  • Error message is displayed when a large number of dvPorts is in use
    When you power on a virtual machine with a dvPort on a host that already has a large number of dvPorts in use, an Out of memory or Out of resources error is displayed. This can also occur when you list the switches on a host using an esxcli command.

    Workaround: Increase the dvsLargeHeap size.

    1. Change the host's advanced configuration option:
      • ESXi Shell command: esxcfg-advcfg -s /Net/DVSLargeHeapMaxSize 100 (an esxcli equivalent is shown after this item)
      • vSphere Client: Browse to the host's Configuration tab -> Software panel -> Advanced Settings -> Net, and change the DVSLargeHeapMaxSize value from 80 to 100.
      • vSphere 5.1 Web Client: Browse to the host's Manage tab -> Settings -> Advanced System Settings -> Filter, and change the DVSLargeHeapMaxSize value from 80 to 100.
    2. Capture a host profile from the host. Associate the profile with the host and update the answer file.
    3. Reboot the host to confirm the value is applied.

    Note: The max value for /Net/DVSLargeHeapMaxSize is 128.

    Contact VMware Support if you face issues during a large deployment after changing /Net/DVSLargeHeapMaxSize to 128 and the logs display either of the following error messages:

    Unable to Add Port; Status(bad0006)= Limit exceeded

    Failed to get DVS state from vmkernel Status (bad0014)= Out of memory
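
    As an alternative to the esxcfg-advcfg form shown in step 1, the same option can be set with esxcli; this is a hedged equivalent, and the host profile and reboot steps still apply:
    esxcli system settings advanced set -o /Net/DVSLargeHeapMaxSize -i 100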

  • ESXi fails with Emulex BladeEngine-3 10G NICs (be2net driver)
    ESXi might fail on systems that have Emulex BladeEngine-3 10G NICs when a vCDNI-backed network pool is configured using VMware vCloud Director. You must obtain an updated device driver from Emulex when configuring a network pool with this device.

    Workaround: None.

Storage Issues

  • RDM LUNs get detached from virtual machines that migrate from VMFS datastore to NFS datastore
    If you use the vSphere Web Client to migrate virtual machines with RDM LUNs from a VMFS datastore to an NFS datastore, the migration operation completes without any error or warning messages, but the RDM LUNs get detached from the virtual machine after migration. However, the migration operation creates a vmdk file on the NFS datastore with the same size as the RDM LUN, to replace the RDM LUN.
    If you use vSphere Client, an appropriate error message is displayed in the compatibility section of the migration wizard.

    Workaround: None
  • VMFS5 datastore creation might fail when you use an EMC Symmetrix VMAX/VMAXe storage array
    If your ESXi host is connected to a VMAX/VMAXe array, you might not be able to create a VMFS5 datastore on a LUN presented from the array. If this is the case, the following error will appear: An error occurred during host configuration. The error is a result of the ATS (VAAI) portion of the Symmetrix Enginuity Microcode (VMAX 5875.x) preventing a new datastore on a previously unwritten LUN.

    Workaround:

    1. Disable Hardware Accelerated Locking on the ESXi host.
    2. Create a VMFS5 datastore.
    3. Reenable Hardware Accelerated Locking on the host.

    Use the following tasks to disable and reenable the Hardware Accelerated Locking parameter.

    In the vSphere Web Client

    1. Browse to the host in the vSphere Web Client navigator.
    2. Click the Manage tab, and click Settings.
    3. Under System, click Advanced System Settings.
    4. Select VMFS3.HardwareAcceleratedLocking and click the Edit icon.
    5. Change the value of the VMFS3.HardwareAcceleratedLocking parameter:
      • 0 disabled
      • 1 enabled

    In the vSphere Client

    1. In the vSphere Client inventory panel, select the host.
    2. Click the Configuration tab, and click Advanced Settings under Software.
    3. Change the value of the VMFS3.HardwareAcceleratedLocking parameter:
      • 0 disabled
      • 1 enabled
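
    If you prefer the command line, a hedged equivalent of the disable and re-enable steps from the ESXi Shell is the following; set the value back to 1 after the VMFS5 datastore is created:
    esxcli system settings advanced set -i 0 -o /VMFS3/HardwareAcceleratedLocking
    esxcli system settings advanced set -i 1 -o /VMFS3/HardwareAcceleratedLocking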

  • Attempts to create a GPT partition on a blank disk might fail when using Storagesystem::updateDiskPartitions()
    You can use the Storagesystem::computeDiskPartitionInfo API to retrieve disk specification, and then use the disk specification to label the disk and create a partition with Storagesystem::updateDiskPartitions(). However, if the disk is initially blank and the target disk format is GPT, your attempts to create the partition might fail.

    Workaround: Use DatastoreSystem::createVmfsDatastore instead to label and partition a blank disk, and to create a VMFS5 datastore.

  • Attempts to create a diagnostic partition on a GPT disk might fail
    If a GPT disk has no partitions, or the trailing portion of the disk is empty, you might not be able to create a diagnostic partition on the disk.

    Workaround: Avoid using GPT-formatted disks for diagnostic partitions. If you must use an existing blank GPT disk for the diagnostic partition, convert the disk to the MBR format.

    1. Create a VMFS3 datastore on the disk.
    2. Remove the datastore.

    The disk format changes from GPT to MBR.
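
    You can confirm that the partition table label actually changed with partedUtil; the device path is a placeholder, and the first line of the output shows gpt or msdos:
    partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx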

  • ESXi cannot boot from a FCoE LUN that is larger than 2TB and accessed through an Intel FCoE NIC
    When you install ESXi on a FCoE boot LUN that is larger than 2TB and is accessed through an Intel FCoE NIC, the installation might succeed. However, when you attempt to boot your ESXi host, the boot fails. You see the error messages: ERROR: No suitable geometry for this disk capacity! and ERROR: Failed to connect to any configured disk! at BIOS time.

    Workaround: Do not install ESXi on a FCoE LUN larger than 2TB if it is connected to the Intel FCoE NIC configured for FCoE boot. Install ESXi on a FCoE LUN that is smaller than 2TB.

Server Configuration Issues

  • Applying host profiles might fail when accessing VMFS folders through console
    If a user is accessing the VMFS datastore folder through the console at the same time a host profile is being applied to the host, the remediation or apply task might fail. This failure occurs when stateless caching is enabled on the host profile or if an auto deploy installation occurred.

    Workaround: Do not access the VMFS datastore through the console while remediating the host profile.

  • Leading white space in login banner causes host profile compliance failure
    When you edit a host profile and change the text for the Login Banner (Message of the Day) option, but add a leading white space in the banner text, a compliance error occurs when the profile is applied. The compliance error Login banner has been modified appears.

    Workaround: Edit the host profile and remove the leading white space from the Login Banner policy option.

  • Host profile extracted from ESXi 5.0 host fails to apply to ESX 5.1 host with Active Directory enabled
    When applying a host profile with Active Directory enabled that was originally extracted from an ESXi 5.0 host to an ESX 5.1 host, the apply task fails. Setting the maximum memory size for the likewise system resource pool might cause an error to occur. When Active Directory is enabled, the services in the likewise system resource pool consume more than the default maximum memory limit for ESXi 5.0 captured in an ESXi 5.0 host profile. As a result, applying an ESXi 5.0 host profile fails during attempts to set the maximum memory limit to the ESXi 5.0 levels.

    Workaround: Perform one of the following:

    • Manually edit the host profile to increase the maximum memory limit for the likewise group.
      1. From the host profile editor, navigate to the Resource Pool folder, and view host/vim/vmvisor/plugins/likewise.
      2. Modify the Maximum Memory (MB) setting from 20 (the ESXi 5.0 default) to 25 (the ESXi 5.1 default).
    • Disable the subprofile for the likewise group. Do one of the following:
      • In the vSphere Web Client, edit the host profile and deselect the checkbox for the Resource Pool folder. This action disables all resource pool management. You can disable this specifically for the host/vim/vmvisor/plugins/likewise item under the Resource Pool folder.
      • In the vSphere Client, right-click the host profile and select Enable/Disable Profile Configuration... from the menu.

  • Host gateway deleted and compliance failures occur when ESXi 5.0.x host profile re-applied to stateful ESXi 5.1 host
    When an ESXi 5.0.x host profile is applied to a freshly installed ESXi 5.1 host, the profile compliance status is noncompliant. After you apply the same profile again, it deletes the host's gateway IP, and the compliance status continues to show as noncompliant with the status message IP route configuration doesn't match the specification.

    Workaround: Perform one of the following workarounds:

    • Log in to the host through the DCUI and add the default gateway manually with the following esxcli command:
      esxcli network ip route ipv4 add --gateway xx.xx.xx.xx --network yy.yy.yy.yy
    • Extract a new host profile from the ESXi 5.1 host after applying the ESXi 5.0.x host profile once. Migrate the ESXi 5.1 host to the new ESXi 5.1-based host profile.

  • Compliance errors might occur after stateless caching enabled on USB disk
    When stateless caching to USB disks is enabled on a host profile, compliance errors might occur after remediation. After rebooting the host so that the remediated changes are applied, the stateless caching is successful, but compliance failures continue.

    Workaround: No workaround is available.

  • Hosts with large number of datastores time out while applying host profile with stateless caching enabled
    A host that has a large number of datastores times out when applying a host profile with stateless caching enabled.

    Workaround: Use the vSphere Client to increase the timeout:

    1. Select Administration > vCenter Server Settings.
    2. Select Timeout Settings.
    3. Change the values for Normal Operations and Long Operations to 3600 seconds.

  • Cannot extract host profile from host when IPv4 is disabled on vmknics
    If you remove all IPv4 addresses from all vmknics, you cannot extract a host profile from that host. This particularly affects hosts provisioned with Auto Deploy, because host profiles are the only way to save the host configuration in that environment.

    Workaround: Assign at least one IPv4 address to at least one vmknic.
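
    A hedged example of assigning a static IPv4 address to a VMkernel adapter from the ESXi Shell; the interface name and addresses are placeholders:
    esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.20 -N 255.255.255.0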

  • Applying host profile fails when applying a host profile extracted from an ESXi 4.1 host on an ESXi 5.1 host
    If you set up a host with ESXi 4.1, extract a host profile from this host (with vCenter Server), and attempt to attach a profile to an ESXi 5.1 host, the operation fails when you attempt to apply the profile. You might receive the following error: NTP service turned off.

    On ESXi 4.1, the NTP daemon could be running (on state) even without an NTP server configured in /etc/ntp.conf. ESXi 5.1 requires an explicit NTP server for the service to run.

    Workaround: Turn on the NTP service by adding a valid NTP server in /etc/ntp.conf and restarting the NTP daemon on the 5.1 host. Confirm that the setting persists after a reboot. This action ensures that the NTP configuration is synchronized between the host and the profile being applied to it.
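
    A hedged sketch of the equivalent commands on the ESXi 5.1 host; the NTP server address is a placeholder, and configuring the server through the vSphere Client Time Configuration settings achieves the same result:
    echo "server 192.168.1.1" >> /etc/ntp.conf
    /etc/init.d/ntpd restart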

  • Host profile shows noncompliant after profile successfully applied
    This problem occurs when extracting a host profile from an ESXi 5.0 host and applying it to an ESXi 5.1 host that contains a local SAS device. Even when the host profile remediation is successful, the host profile compliance shows as noncompliant.

    You might receive errors similar to the following:

    • Specification state absent from host: device naa.500000e014ab4f70 Path Selection Policy needs to be set to VMW_PSP_FIXED
    • Specification state absent from host: device naa.500000e014ab4f70 parameters needs to be set to State = "on" Queue Full Sample Size = "0" Queue Full Threshold = "0"

    The ESXi 5.1 host profile storage plugin filters out local SAS devices from the PSA and NMP device configuration, while the ESXi 5.0 host profile contains such device configurations. This results in a missing device when applying the older host profile to a newer host.

    Workaround: Manually edit the host profile, and remove the PSA and NMP device configuration entries for all local SAS devices. You can determine if a device is a local SAS by entering the following esxcli command:
    esxcli storage core device list

    If the output for a device includes the following line with the value true, the device is a local SAS device:
    Is Local SAS Device: true

  • Default system services always start on ESXi hosts provisioned with Auto Deploy
    For ESXi hosts provisioned with Auto Deploy, the Service Startup Policy in the Service Configuration section of the associated host profile is not fully honored. In particular, if one of the services that is turned on by default on ESXi has a Startup Policy value of off, that service still starts at boot time on the ESXi host provisioned with Auto Deploy.

    Workaround: Manually stop the service after booting the ESXi host.

  • Information retrieval from VMWARE-VMINFO-MIB does not happen correctly after an snmpd restart
    Some information from VMWARE-VMINFO-MIB might be missing during SNMPWalk after you restart the snmpd daemon using /etc/init.d/snmpd restart from the ESXi Shell.

    Workaround: Do not use /etc/init.d/snmpd restart. You must use the esxcli system snmp set --enable command to start or stop the SNMP daemon. If you used /etc/init.d/snmpd restart to restart snmpd from the ESXi Shell, restart Hostd, either from DCUI or by using /etc/init.d/hostd restart from the ESXi Shell.

vCenter Server and vSphere Client Issues
  • Enabling or Disabling View Storage Accelerator might cause ESXi hosts to lose connectivity to vCenter Server
    If VMware View is deployed with vSphere 5.1, and a View administrator enables or disables View Storage Accelerator in a desktop pool, ESXi 5.1 hosts might lose connectivity to vCenter Server 5.1.

    The View Storage Accelerator feature is also called Content Based Read Caching. In the View 5.1 View Administrator console, the feature is called Host caching.

    Workaround: Do not enable or disable View Storage Accelerator in View environments deployed with vSphere 5.1.

Virtual Machine Management Issues
  • Virtual Machine compatibility upgrade from ESX 3.x and later (VM version 4) incorrectly configures the Windows virtual machine Flexible adapter to the Windows system default driver
    If you have a Windows guest operating system with a Flexible network adapter that is configured for the VMware Accelerated AMD PCnet Adapter driver, when you upgrade the virtual machine compatibility from ESX 3.x and later (VM version 4) to any later compatibility setting, for example, ESXi 4.x and later (VM version 7), Windows configures the Flexible adapter to the Windows AMD PCNET Family PCI Ethernet Adapter default driver.
    This misconfiguration occurs because the VMware Tools drivers are unsigned and Windows picks up the signed default Windows driver. Flexible adapter network settings that existed before the compatibility upgrade are lost, and the network speed of the NIC changes from 1Gbps to 10Mbps.

    Workaround: Configure the Flexible network adapters to use the VMXNET driver from the Windows guest OS after you upgrade the virtual machine's compatibility. If your guest is updated with ESXi 5.1 VMware Tools, the VMXNET driver is installed in the following location: C:\Program Files\Common Files\VMware\Drivers\vmxnet\.

  • When you install VMware Tools on a virtual machine and reboot, the network becomes unusable
    On virtual machines with CentOS 6.3 and Oracle Linux 6.3 operating systems, the network becomes unusable after a successful installation of VMware Tools and a reboot of the virtual machine. When you attempt to manually get the IP address from a DHCP server or set a static IP address from the command line, the error Cannot allocate memory appears.
    The problem is that the Flexible network adapter, which is used by default, is not a good choice for those operating systems.

    Workaround: Change the network adapter from Flexible to E1000 or VMXNET 3, as follows:

    1. Run the vmware-uninstall-tools.pl command to uninstall VMware Tools.
    2. Power off the virtual machine.
    3. In the vSphere Web Client, right-click the virtual machine and select Edit Settings.
    4. Click Virtual Hardware, and remove the current network adapter by clicking the Remove icon.
    5. Add a new Network adapter, and choose the adapter type E1000 or VMXNET 3.
    6. Power on the virtual machine.
    7. Reinstall VMware Tools.
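
    If you prefer to script the adapter change, a hedged PowerCLI sketch is shown below; the virtual machine name is a placeholder, the virtual machine should be powered off, and VMware Tools still needs to be reinstalled afterward:
    Get-VM "CentOS63-VM" | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false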

  • Clone or migration operations that involve non-VMFS virtual disks on ESXi fail with an error
    Whether you use the vmkfstools command or the client to perform a clone, copy, or migration operation on virtual disks in hosted format, the operation fails with the following error message: The system cannot find the file specified.

    Workaround: To perform a clone, copy, or migration operation on virtual disks in hosted format, you need to load the VMkernel multiextent module into ESXi.

    1. Log in to ESXi Shell and load the multiextent module.
      # vmkload_mod multiextent
    2. Check if any of your virtual machine disks are of a hosted type. Hosted disks end with the -s00x.vmdk extension.
    3. Convert virtual disks in hosted format to one of the VMFS formats.
      1. Clone source hosted disk test1.vmdk to test2.vmdk.
        # vmkfstools -i test1.vmdk test2.vmdk -d zeroedthick|eagerzeroedthick|thin
      2. Delete the hosted disk test1.vmdk after successful cloning.
        # vmkfstools -U test1.vmdk
      3. Rename the cloned vmfs type disk test2.vmdk to test1.vmdk.
        # vmkfstools -E test2.vmdk test1.vmdk
    4. Unload the multiextent module.
      # vmkload_mod -u multiextent

  • A virtual machine does not have an IP address assigned to it and does not appear operational
    This issue is caused by a LUN reset request initiated from a guest OS. This issue is specific to IBM XIV Fibre Channel array with software FCoE configured in ESXi hosts. Virtual machines that reside on the LUN show the following problems:

    • No IP address is assigned to the virtual machines.
    • Virtual machines cannot power on or power off.
    • No mouse cursor is showing inside the console. As a result, there is no way to control or interact with the affected virtual machine inside the guest OS.

    Workaround: From your ESXi host, reset the LUN where virtual machines that experience troubles reside.

    1. Run the following command to get the LUN's information:
      # vmkfstools -P /vmfs/volumes/DATASTORE_NAME
    2. Search for the following line in the output to obtain the LUN's UID:
      Partitions spanned (on 'lvm'): eui.001738004XXXXXX:1
      eui.001738004XXXXXX is the device UID.
    3. Run the following command to reset the LUN:
      # vmkfstools -L lunreset /vmfs/devices/disks/eui.001738004XXXXXX
    4. If a non-responsive virtual machine resides on a datastore that has multiple LUNs associated with it, for example, added extents, perform the LUN reset for all datastore extents.

Migration Issues
  • Attempts to use Storage vMotion to migrate multiple linked-clone virtual machines fail
    This failure typically affects linked-clone virtual machines. The failure occurs when the size of delta disks is 1MB and the Content Based Read Cache (CBRC) feature has been enabled in ESXi hosts. You see the following error message: The source detected that the destination failed to resume.

    Workaround: Use one of the following methods to avoid Storage vMotion failures:

    • Use 4KB as the delta disk size.

    • Instead of using Storage vMotion, migrate powered-off virtual machines to a new datastore.

VMware HA and Fault Tolerance Issues
  • Fault tolerant virtual machines crash when set to record statistics information on a vCenter Server beta build
    The vmx*3 feature allows users to run the stats vmx to collect performance statistics for debugging support issues. The stats vmx is not compatible when Fault Tolerance is enabled on a vCenter Server beta build.

    Workaround: When enabling Fault Tolerance, ensure that the virtual machine is not set to record statistics on a beta build of vCenter Server.

Supported Hardware Issues
  • PCI Unknown Unknown status is displayed in vCenter Server on the Apple Mac Pro server
    The hardware status tab in vSphere 5.1 displays Unknown Unknown for some PCI devices on the Apple Mac Pro. This is because of missing hardware descriptions for these PCI devices on the Apple Mac Pro. The display error in the hardware status tab does not prevent these PCI devices from functioning.

    Workaround: None.

  • PCI Unknown Unknown status is displayed in vCenter Server on the AMD PileDriver
    The hardware status tab in vSphere 5.1 displays Unknown Unknown for some PCI devices on the AMD PileDriver. This is because of missing hardware descriptions for these PCI devices on the AMD PileDriver. The display error in the hardware status tab does not prevent these PCI devices from functioning.

    Workaround: None.

  • DPM is not supported on the Apple Mac Pro server
    The vSphere 5.1 distributed power management (DPM) feature is not supported on the Apple Mac Pro. Do not add the Apple Mac Pro to a cluster that has DPM enabled. If the host enters "Standby" state, it fails to exit the standby state when the power on command is issued and displays an operation timed out error. The Apple Mac Pro cannot wake from the software power off command that is used by vSphere when putting a host in standby state.

    Workaround: If the Apple Mac Pro host enters "Standby" you must power on the host by physically pressing the power button.

  • IPMI is not supported on the Apple Mac Pro server
    The hardware status tab in vSphere 5.1 does not display correct data or there is missing data for some of the hardware components on the Apple Mac Pro. This is because IPMI is not supported on the Apple Mac Pro.

    Workaround: None.

Miscellaneous Issues
  • After a network or storage interruption, syslog over TCP, syslog over SSL, and storage logging do not restart automatically
    After a network or storage interruption, the syslog service does not restart automatically in certain configurations. These configurations include syslog over TCP, syslog over SSL, and storage logging.

    Workaround: Restart syslog explicitly by running the following command:
    esxcli system syslog reload
    You can also configure syslog over UDP, which restarts automatically.

  • Windows Server 2012 Failover Clustering is not supported
    If you attempt to create a cluster for Failover Clustering in Windows Server 2012 and choose to run validation tests, the wizard completes the validation tests with warnings and then returns to running the validation tests again. The wizard in the Windows Server 2012 guest operating system does not continue to the cluster creation stage.

    Workaround: None.