
VMware ESXi 5.1 Update 3 Release Notes

VMware ESXi 5.1 Update 3 | 04 Dec 2014 | Build 2323236

Last updated: 02 Jul 2015

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

This release of VMware ESXi contains the following enhancements:

  • Support for additional guest operating systems. This release updates support for many guest operating systems.
    For a complete list of guest operating systems supported with this release, see the VMware Compatibility Guide.

  • Resolved Issues. This release also resolves a number of issues that have been documented in the Resolved Issues section.

Earlier Releases of ESXi 5.1

Features and known issues of ESXi 5.1 are described in the release notes for each release. To view release notes for earlier releases of ESXi 5.1, see the VMware vSphere 5.1 documentation.

Internationalization

VMware vSphere 5.1 Update 3 is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese

Compatibility and Installation

Upgrading vSphere Client

After you upgrade vCenter Server or the ESXi host to vSphere 5.1 Update 3 and attempt to connect to the vCenter Server or the ESXi host using a version of vSphere Client earlier than 5.1 Update 1b, you are prompted to upgrade the vSphere Client to vSphere Client 5.1 Update 3. The vSphere Client upgrade is mandatory. You must use only the upgraded vSphere Client to access vSphere 5.1 Update 3.

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, vCenter Server, the vSphere Web Client, and optional VMware products. In addition, go to this site for information about supported management and backup agents before installing ESXi or vCenter Server.

The vSphere Client and the vSphere Web Client are packaged with the vCenter Server and modules ZIP file. You can install one or both clients from the VMware vCenter Installer wizard.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 5.1.3 adds support for ESXi 5.1 Update 3 and vCenter Server 5.1 Update 3 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

Hardware Compatibility for ESXi

To determine which processors, storage devices, SAN arrays, and I/O devices are compatible with vSphere 5.1 Update 3, see the ESXi 5.1 Update 3 information in the VMware Compatibility Guide.

The list of supported processors is expanded for this release. To determine which processors are compatible with this release, see the VMware Compatibility Guide.

Third-Party Switch Compatibility for ESXi

VMware supports Cisco Nexus 1000V with vSphere 5.1. For more information about Cisco Nexus 1000V, see the Cisco Release Notes. As in previous vSphere releases, Cisco Application Virtual Switch (AVS) is not supported.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with ESXi 5.1 Update 3, see the ESXi 5.1 Update 3 information in the VMware Compatibility Guide.

Beginning with vSphere 5.1, support level changes for older guest operating systems have been introduced. For descriptions of each support level, see Knowledge Base article 2015161. The VMware Compatibility Guide provides detailed support information for all operating system releases and VMware product releases.

The following guest operating system releases are no longer supported by their respective operating system vendors. Future vSphere releases will not support these guest operating systems, although vSphere 5.1 Update 3 does support them.

  • Windows NT
  • All 16-bit Windows and DOS releases (Windows 98, Windows 95, Windows 3.1)
  • Debian 4.0 and 5.0
  • Red Hat Enterprise Linux 2.1
  • SUSE Linux Enterprise 8
  • SUSE Linux Enterprise 9 earlier than SP4
  • SUSE Linux Enterprise 10 earlier than SP3
  • SUSE Linux Enterprise 11 earlier than SP1
  • Ubuntu releases 8.04, 8.10, 9.04, 9.10 and 10.10
  • All releases of Novell NetWare
  • All releases of IBM OS/2

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 5.1 Update 3. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are no longer supported. To use such virtual machines on ESXi 5.1 Update 3, upgrade the virtual machine hardware version. See the vSphere Upgrade documentation.

Installation Notes for This Release

See the vSphere Installation and Setup documentation for information about installing and configuring ESXi and vCenter Server.

Although the installation process is straightforward, several subsequent configuration steps are essential. In particular, read the following:

Migrating Third-Party Solutions

You cannot directly migrate third-party solutions installed on an ESX or ESXi host as part of a host upgrade. Architectural changes between ESXi 5.0 and ESXi 5.1 result in the loss of third-party components and possible system instability. To accomplish such migrations, you can create a custom ISO file with Image Builder. For information about upgrading with third-party customizations, see the vSphere Upgrade documentation. For information about using Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

vSphere 5.1 Update 3 supports only CPUs with LAHF and SAHF CPU instruction sets. During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 5.1 Update 3. If your host hardware is not compatible, a purple screen appears with an incompatibility information message, and you cannot install or upgrade to vSphere 5.1 Update 3.
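On a 64-bit Linux system you can check this requirement ahead of time: the kernel reports LAHF/SAHF support in long mode as the lahf_lm flag in /proc/cpuinfo. This is only an illustrative pre-check, not the installer's own test:

```shell
#!/bin/sh
# Illustrative pre-check on a Linux system (the ESXi installer runs its own test).
# The lahf_lm flag indicates LAHF/SAHF support in 64-bit long mode.
if grep -qw 'lahf_lm' /proc/cpuinfo 2>/dev/null; then
    echo "CPU reports LAHF/SAHF support in long mode"
else
    echo "lahf_lm flag not found; this CPU may not be supported"
fi
```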

Upgrades for This Release

For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.

ESXi 5.1 Update 3 offers the following tools for upgrading ESX/ESXi hosts:

  • Upgrade interactively using an ESXi installer ISO image on CD-ROM, DVD, or USB flash drive. You can run the ESXi 5.1 Update 3 installer from a CD-ROM, DVD, or USB flash drive to perform an interactive upgrade. This method is appropriate for a small number of hosts.
  • Perform a scripted upgrade. You can upgrade or migrate from ESX/ESXi 4.x hosts, ESXi 5.0.x, and ESXi 5.1.x hosts to ESXi 5.1 Update 3 by invoking an update script, which provides an efficient, unattended upgrade. Scripted upgrades also provide an efficient way to deploy multiple hosts. You can use a script to upgrade ESXi from a CD-ROM or DVD drive, or by PXE-booting the installer.

  • vSphere Auto Deploy. If your ESXi 5.x host was deployed using vSphere Auto Deploy, you can use Auto Deploy to reprovision the host by rebooting it with a new image profile that contains the ESXi upgrade.

  • ESXCLI. You can update and apply patches to ESXi 5.0.x and ESXi 5.1.x hosts by using the ESXCLI command-line utility. This can be done either from a download depot on vmware.com or from a downloaded ZIP file of a depot that is prepared by a VMware partner. You cannot use esxcli to upgrade ESX or ESXi hosts to version 5.1.x from ESX/ESXi versions earlier than version 5.0.
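For the scripted-upgrade path, the installer reads a kickstart-style script whose location is supplied with the ks= boot option. A minimal sketch with illustrative values; see the vSphere Installation and Setup documentation for the full command set:

```
# Minimal ks.cfg sketch for an unattended ESXi upgrade (illustrative).
# Accept the EULA and upgrade the existing ESXi installation on the first disk.
vmaccepteula
upgrade --firstdisk
```

The script location is passed at boot time, for example ks=usb:/ks.cfg or an HTTP URL, depending on how the installer is booted.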

Supported Upgrade Paths for Upgrade to ESXi 5.1 Update 3:

  • VMware-VMvisor-Installer-5.1.0.update03-2323236.x86_64.iso
    Supported upgrade tools: VMware vCenter Update Manager, CD Upgrade, Scripted Upgrade
    Upgrade from ESX/ESXi 4.0 (includes Updates 1, 2, 3, and 4): Yes
    Upgrade from ESX/ESXi 4.1 (includes Updates 1, 2, and 3): Yes
    Upgrade from ESXi 5.0 (includes Updates 1, 2, and 3): Yes
    Upgrade from ESXi 5.1 (includes Updates 1 and 2): Yes

  • update-from-esxi5.1-5.1_update03.zip
    Supported upgrade tools: VMware vCenter Update Manager, ESXCLI, VMware vSphere CLI
    Upgrade from ESX/ESXi 4.0: No
    Upgrade from ESX/ESXi 4.1: No
    Upgrade from ESXi 5.0: Yes*
    Upgrade from ESXi 5.1: Yes

  • Using patch definitions downloaded from VMware portal (online)
    Supported upgrade tool: VMware vCenter Update Manager with patch baseline
    Upgrade from ESX/ESXi 4.0: No
    Upgrade from ESX/ESXi 4.1: No
    Upgrade from ESXi 5.0: No
    Upgrade from ESXi 5.1: Yes

*Note: Upgrade from ESXi 5.0.x to ESXi 5.1 Update 3 using update-from-esxi5.1-5.1_update03.zip is supported only with ESXCLI. To perform the upgrade, run the esxcli software profile update --depot=<depot_location> --profile=<profile_name> command. For more information, see the ESXi 5.1.x Upgrade Options topic in the vSphere Upgrade guide.
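As a sketch of this ESXCLI path, with a hypothetical datastore path for the offline bundle (the commands are only assembled and echoed here; run them in the ESXi shell or through vCLI against the host):

```shell
#!/bin/sh
# Hypothetical location of the uploaded offline bundle on a host datastore.
DEPOT="/vmfs/volumes/datastore1/update-from-esxi5.1-5.1_update03.zip"
PROFILE="ESXi-5.1.0-20141202001-standard"

# List the image profiles contained in the bundle, then apply the chosen one.
echo "esxcli software sources profile list --depot=$DEPOT"
echo "esxcli software profile update --depot=$DEPOT --profile=$PROFILE"
```

The profile names match the image profiles listed under Patches Contained in this Release; place the host in maintenance mode first and reboot it after the update.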

Open Source Components for VMware vSphere 5.1 Update 3

The copyright statements and licenses applicable to the open source software components distributed in vSphere 5.1 Update 3 are available at http://www.vmware.com/download/open_source.html. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent generally available release of vSphere.

Product Support Notices

  • vSphere Client. In vSphere 5.1, all new vSphere features are available only through the vSphere Web Client. The traditional vSphere Client continues to operate, supporting the same feature set as vSphere 5.0, but does not expose any of the new features in vSphere 5.1.

    vSphere 5.1 and its subsequent update and patch releases are the last releases to include the vSphere Client. Future major releases of VMware vSphere will include only the vSphere Web Client.

    For vSphere 5.1, bug fixes for the traditional vSphere Client are limited to security and critical issues. Critical bugs are deviations from specified product functionality that cause data corruption, data loss, system crashes, or significant customer application downtime, and for which no workaround can be implemented.

  • VMware Toolbox. vSphere 5.1 is the last release to include support for the VMware Tools graphical user interface, VMware Toolbox. VMware will continue to update and support the Toolbox command-line interface (CLI) to perform all VMware Tools functions.

  • VMI Paravirtualization. vSphere 4.1 was the last release to support the VMI guest operating system paravirtualization interface. For information about migrating virtual machines that are enabled for VMI so that they can run on later vSphere releases, see Knowledge Base article 1013842.

  • Windows Guest Operating System Customization. vSphere 5.1 is the last release to support customization for Windows 2000 guest operating systems. VMware will continue to support customization for later versions of Windows guests.

  • VMCI Sockets. Guest-to-guest communication between virtual machines is deprecated in the vSphere 5.1 release. This functionality will be removed in the next major release. VMware will continue to support host-to-guest communication.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi510-Update03 contains the following individual bulletins:

ESXi510-201412201-UG: Updates ESXi 5.1 esx-base vib
ESXi510-201412202-UG: Updates ESXi 5.1 tools-light vib
ESXi510-201412203-UG: Updates ESXi 5.1 scsi-megaraid-sas vib

Patch Release ESXi510-Update03 Security-only contains the following individual bulletins:

ESXi510-201412101-SG: Updates ESXi 5.1 esx-base vib

Patch Release ESXi510-Update03 contains the following image profiles:

ESXi-5.1.0-20141202001-standard
ESXi-5.1.0-20141202001-no-tools

Patch Release ESXi510-Update03 Security-only contains the following image profiles:

ESXi-5.1.0-20141201001s-standard
ESXi-5.1.0-20141201001s-no-tools

For information on patch and update classification, see KB 2014447.

Resolved Issues

This section describes resolved issues in this release:

Backup

  • When you use backup software, the list of allocated disk sectors returned might be incorrect and the incremental backups might appear to be corrupt or missing
    When you use backup software that uses the Virtual Disk Development Kit (VDDK) API call QueryChangedDiskAreas(), the list of allocated disk sectors returned might be incorrect and the incremental backups might appear to be corrupt or missing. A message similar to the following is written to vmware.log:
    DISKLIB-CTK: Resized change tracking block size from XXX to YYY
    For more information, see KB 2090639.
    This issue is resolved in this release.
  • VMware Tools might stop responding while opening the Network File System mounts
    When you take a quiesced snapshot of a Linux virtual machine, VMware Tools might stop responding while opening Network File System (NFS) mounts, causing all file system activity on the guest operating system to stop. An error message similar to the following is displayed and the virtual machine stops responding:
    An error occurred while saving the snapshot: msg.snapshot.error-QUIESCINGERROR
    This issue is resolved in this release.
  • Attempt to restore a virtual machine might fail with an error
    An attempt to restore a virtual machine on an ESXi host using vSphere Data Protection might fail. An error message similar to the following is displayed:
    Unexpected exception received during reconfigure
    This issue is resolved in this release.
  • Changed Block Tracking is reset after Storage vMotion
    Performing a Storage vMotion operation on vSphere 5.x resets Changed Block Tracking (CBT).
    For more information, see KB 2048201.
    This issue is resolved in this release.
  • Red Hat Enterprise Linux virtual machines might stop responding while taking quiesced snapshots
    When you take quiesced snapshots to back up powered-on virtual machines running Red Hat Enterprise Linux (RHEL), the virtual machines might stop responding and might not recover without a reboot. This issue occurs when you run certain VIX commands while performing a quiesced snapshot operation.
    This issue is resolved in this release.

CIM and API

  • False alarms appear in the Hardware Status tab of the vSphere Client
    After you upgrade Integrated Lights Out (iLO) firmware on HP DL980 G7, false alarms appear in the Hardware Status tab of the vSphere Client.
    Error messages similar to the following might be logged in the /var/log/syslog.log file:
    2014-10-17T08:50:58Z sfcb-vmware_raw[68712]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x8 FAILED cc=0xffffffff
    2014-10-17T08:50:58Z sfcb-vmware_raw[68712]: IpmiIfcFruChassis: Reading FRU Chassis Info Area length for 0x0 FAILED
    2014-10-17T08:50:58Z sfcb-vmware_raw[68712]: IpmiIfcFruBoard: Reading FRU Board Info details for 0x0 FAILED cc=0xffffffff
    2014-10-17T08:50:58Z sfcb-vmware_raw[68712]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x70 FAILED cc=0xffffffff
    2014-10-17T08:50:58Z sfcb-vmware_raw[68712]: IpmiIfcFruProduct: Reading FRU product Info Area length for 0x0 FAILED
    2014-10-17T08:51:14Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: data length mismatch req=19,resp=3
    2014-10-17T08:51:15Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0001,resp=0002
    2014-10-17T08:51:17Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0002,resp=0003
    2014-10-17T08:51:19Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0003,resp=0004
    2014-10-17T08:51:19Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0004,resp=0005
    2014-10-17T08:51:20Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0005,resp=0006
    2014-10-17T08:51:21Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0006,resp=0007

    This issue is resolved in this release.
  • The return value of hrSWRunPerfCPU is reported incorrectly
    On an ESXi host that has SNMP enabled, the value of hrSWRunPerfCPU is reported incorrectly.
    This issue is resolved in this release.
  • Query operation fails when you query CIM_System using EMC PowerPath
    When PowerPath queries the CIM_System under /VMware/esxv2/, the operation fails and an error is reported from CIM server. The error is similar to the following:
    ThreadPool --- Failed to enqueue request. Too many queued requests already: vmwaLINUX
    ThreadPool --- Failed to enqueue request. Too many queued requests already: vmware_base, active 5, queued 11 .\TreeViewHostDiscovery.cpp 611
    This issue is resolved in this release.
  • Core dumps from the Ethernet provider are observed while updating sensor data
    While updating the sensor data in the Hardware Status tab on an IBM x3650 M3 server, Small Footprint CIM Broker (SFCB) core dumps from the Ethernet provider are observed. The Hardware Status tab does not display data even after multiple attempts.
    This issue is resolved in this release.
  • CIM indications might fail when you use Auto Deploy to reboot the ESXi hosts
    If the sfcbd service stops running, the CIM indication settings in the host profile cannot be applied successfully.
    This issue is resolved in this release by ensuring that the CIM indications do not rely on the status of the sfcbd service while applying the host profile.
  • sfcbd service might not stop or restart under certain conditions
    When you stop and start the hardware monitoring service (sfcbd), which is responsible for providing hardware status information for the ESXi host, the process might stop with error messages similar to the following written to the syslog file, and might not restart:
    sfcbd: Sending TERM signal to sfcbd
    sfcbd-watchdog: Sleeping for 20 seconds
    sfcbd: Sending TERM signal to sfcbd
    sfcbd-watchdog: Sleeping for 20 seconds
    sfcbd-watchdog: Waited for 3 20 second intervals, SIGKILL next
    sfcbd: Stopping sfcbd
    sfcbd-watchdog: Sleeping for 20 seconds
    sfcbd-watchdog: Providers have terminated, lets kill the sfcbd.
    sfcbd-watchdog: Reached max kill attempts. watchdog is exiting

    As a result, when you attempt to view the hardware status from the vCenter Server, an error message similar to the following might be displayed:
    The monitoring service on the esx host is not responding or not available.
    This issue is resolved in this release.
  • Web Based Enterprise Management (WBEM) queries might fail when you attempt to monitor the hardware health of an IPv6 enabled ESXi host
    WBEM queries might fail when you attempt to monitor the hardware health of an ESXi host that uses IPv6. An error message similar to the following is written to the syslog file:
    Timeout error accepting SSL connection exiting.
    To resolve this issue, a new configuration parameter httpSelectTimeout is added that lets you set the timeout value.
  • ESXi might send duplicate events to management software
    ESXi might send duplicate events to the management software when an Intelligent Platform Management Interface (IPMI) sensor event is triggered on the ESXi Host.
    This issue is resolved in this release.
  • ESXi host might display the power supply information as unknown
    When you attempt to view the hardware status of an ESXi host by connecting directly to the ESXi host or to the vCenter Server that manages the ESXi host, the power supply information might be displayed as unknown.
    This issue is resolved in this release.
  • Class CIM_NetworkPort query might report inconsistent results
    Attempt to monitor hardware using the Class CIM_NetworkPort query with CIM or WBEM services might report inconsistent values on ESXi 5.1.
    This issue is resolved in this release.
  • Monitoring an ESXi 5.1 host with Dell OpenManage might fail due to openwsmand error
    Monitoring an ESXi 5.1 host with Dell OpenManage might fail due to an openwsmand error. An error message similar to the following might be reported in syslog.log:
    Failed to map segment from shared object: No space left on device
    This issue is resolved in this release.
  • Hardware health monitoring might fail to respond
    Hardware health monitoring might fail to respond, and error messages similar to the following might be displayed by CIM providers:
    2014-02-25T02:15:34Z sfcb-CIMXML-Processor[233738]: PAM unable to dlopen(/lib/security/$ISA/pam_passwdqc.so): /lib/security/../../lib/security/pam_passwdqc.so: cannot open shared object file: Too many open files
    2014-02-25T02:15:34Z sfcb-CIMXML-Processor[233738]: PAM adding faulty module: /lib/security/$ISA/pam_passwdqc.so
    2014-02-25T02:15:34Z sfcb-CIMXML-Processor[233738]: PAM unable to dlopen(/lib/security/
    The SFCB service might also stop responding.
    This issue is resolved in this release.
  • Hostd might not respond when you view the health status of an ESXi host
    The hostd service might not respond when you connect vSphere Client to an ESXi host to view health status and perform a refresh action. A message similar to the following is written to hostd.log:
    YYYY-MM-DDThh:mm:ss.344Z [5A344B90 verbose 'ThreadPool'] usage : total=22 max=74 workrun=22 iorun=0 workQ=0 ioQ=0 maxrun=30 maxQ=125 cur=W
    This issue is resolved in this release.
  • ESXi host might experience high I/O latency
    When large CIM requests are sent to the LSI SMI-S provider on an ESXi host, high I/O latency might occur on the ESXi host resulting in poor storage performance.
    This issue is resolved in this release.
  • sfcb might respond with incorrect method provider
    On an ESXi host, when you register two different method providers for the same CIM class under different namespaces, sfcb always responds to requests with the provider listed nearest the top of providerRegister, which might be the wrong method provider.
    This issue is resolved in this release.
  • Loading a kernel module through the CIM interface might fail
    The LoadModule command might fail when you use a CIM client to load a kernel module. An error message similar to the following is displayed:
    Access denied by VMkernel access control policy.
    This issue is resolved in this release.
  • If the PowerStateChangeRequest CIM method is invoked without passing values to any parameters, the ESXi host might not respond to this change request as expected
    On an ESXi 5.1 host, if you invoke the PowerStateChangeRequest CIM method without passing values to any parameters, the ESXi host might not respond to this change request and might not restart.
    This issue is resolved in this release.
  • Querying hardware status on vSphere Client might fail with an error
    Attempt to query the hardware status on vSphere Client might fail. An error message similar to the following is displayed in the /var/log/syslog.log file in the ESXi host:
    TIMEOUT DOING SHARED SOCKET RECV RESULT (1138472) Timeout (or other socket error) waiting for response from provider Header Id (16040) Request to provider 111 in process 4 failed. Error:Timeout (or other socket error) waiting for response from provider Dropped response operation details -- nameSpace: root/cimv2, className: OMC_RawIpmiSensor, Type: 0
    This issue is resolved in this release.
  • Status of some disks might be displayed as UNCONFIGURED GOOD instead of ONLINE
    On an ESXi 5.1 host, the status of some disks might be displayed as UNCONFIGURED GOOD instead of ONLINE. This issue occurs with LSI controllers that use the LSI CIM provider.
    This issue is resolved in this release.
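A note on the WBEM-over-IPv6 fix above: the new httpSelectTimeout parameter is a setting of the sfcbd service. Assuming the standard sfcbd configuration file location, a sketch (the value shown is illustrative; back up the file before editing):

```
# /etc/sfcb/sfcb.cfg -- illustrative entry; the timeout value is an assumption
httpSelectTimeout: 60
```

Restart the sfcbd service (for example with /etc/init.d/sfcbd-watchdog restart) for the change to take effect.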

Guest Operating System

  • Increasing the size of a disk partition in a Mac OS X guest operating system using Disk Utility on an ESXi host might fail
    You cannot increase the size of a disk partition in a Mac OS X guest operating system by using Disk Utility on an ESXi host. This issue does not occur when you increase the size by using VMware Fusion.
    This issue is resolved in this release by changing the headers in the GUID partition table (GPT) after increasing the disk size.
  • Virtual machine might fail when the guest operating system attempts to access the 8-byte PCI MMIO config space
    If you configure a direct pass-through for a Graphical Processing Unit (GPU) on a virtual machine, the virtual machine might fail when the guest operating system attempts to access the 8-byte PCI MMIO config space.
    This issue is resolved in this release.
  • Guest operating system might become unresponsive when a virtual machine is started with a virtual IDE device
    Guest operating system might become unresponsive when a virtual machine is started with a virtual IDE device. An error message similar to the following is written to vmware.log:
    vcpu-0| W110: MONITOR PANIC: vcpu-0:NOT_IMPLEMENTED devices/vide/iovmk/videVMK-vmkio.c:1492.
    This issue is resolved in this release.

Miscellaneous

  • ESXi hosts might enter heap memory exhaustion state and cause an outage
    Under certain conditions, a userworld program might not function as expected and might lead to the accumulation of a large number of zombie processes in the system. As a result, the globalCartel heap might be exhausted, causing operations such as vMotion and SSH to fail because new processes cannot be forked while the ESXi host is in the heap memory exhaustion state. The host does not exit this state until you reboot it.
    Warning messages similar to the following might be written in the VMkernel log:
    2014-07-31T23:58:01.400Z cpu16:3256397)WARNING: Heap: 2677: Heap globalCartel-1 already at its maximum size. Cannot expand.
    2014-08-01T00:10:01.734Z cpu54:3256532)WARNING: Heap: 2677: Heap globalCartel-1 already at its maximum size. Cannot expand.
    2014-08-01T00:20:25.165Z cpu45:3256670)WARNING: Heap: 2677: Heap globalCartel-1 already at its maximum size. Cannot expand.

    This issue is resolved in this release.
  • VMware ESXi 5.1 host displays a bootup alert on the console and in vmkwarnings
    When switchToSwapMode is enabled, an alert is displayed on hosts that are, or at some point were, connected to an HA cluster. An alert similar to the following is displayed:
    ALERT: Error while executing switchToSwapMode: Memory max less than memory already reserved by children

    For more information, see KB 2056609.

    This issue is resolved in this release.
  • Running vm-support might cause excessive logging in the VMkernel log file
    Running vm-support might cause messages similar to the following to be repeatedly written to vmkernel.log:
    VSI_ParamListAddString failed with Out of memory (ok to retry)
    This issue occurs while running vm-support on an ESXi host that has large virtual address space mappings.
    This issue is resolved in this release.
  • Deploying a VM with GPU virtualization might cause ESXi to fail with a purple screen
    An attempt to deploy a virtual machine with GPU virtualization might cause ESXi to fail with a purple diagnostic screen due to low memory in the virtual address space. Warning messages similar to the following are displayed:
    WARNING: HeapMgr: No XMap space left
    ALERT: ERROR vmk_MapVA failure
    This issue is resolved in this release.
  • Using USB devices might cause a System Cache Host Profile compliance failure
    Using USB devices while a host profile is being applied, or after the profile has been applied, might cause a compliance failure with the System Cache Host Profile.
    A message similar to the following is displayed:
    Specification state absent from host: device 'datastore' state needs to be set to 'on'
    This issue is resolved in this release.
  • ESXi host might fail with a purple diagnostic screen stating that a PCPU did not receive heartbeat
    An ESXi host might fail with a purple diagnostic screen stating that a PCPU did not receive heartbeat. A backtrace similar to the following is displayed:
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)Code start: 0xnnnnnnnnnnnnn VMK uptime: 98:23:15:54.570
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)Saved backtrace from: pcpu 6 Heartbeat
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0x]SP_WaitLockIRQ@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0x]LPage_ReapPools@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0x]MemDistributeNUMAPolicy@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0x41802644483a]MemDistributeAllocateAndTestPages@vmkernel#nover+0xnnn stack: 0xnnnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0x418026444d63]MemDistributeAllocAndTestPagesLegacy@vmkernel#nover+0xaa stack: 0xnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0xnnnnnnnnnnnnn]MemDistribute_AllocUserWorldPages@vmkernel#nover+0xnn stack: 0xnnnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0x]UserMemAllocPageInt@vmkernel#nover+0xnn stack: 0xnnnnnnnnnnnnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0x]UserMem_HandleMapFault@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0x]User_Exception@vmkernel#nover+0xnn stack: 0xnnnnnnnnnnnnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0x]Int14_PF@vmkernel#nover+0xnn stack: 0xnnnnnnnnnnnnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)0xnnnnnnnnnnnnn:[0x]gate_entry@vmkernel#nover+0xnn stack: 0xnnnnnnnnnnnnn
    YYYY-MM-DDThh:mm:ss.uuZ cpu14:4110)base fs=0x0 gs=0xnnnnnnnnnnnnn Kgs=0x0
    This issue occurs when the ESXi host experiences a memory overload due to a burst of memory allocation requests.
    This issue is resolved in this release by optimizing the memory allocation path when a large page (lpage) memory pool is utilized.
  • VMkernel remote syslog messages might not match with the severity level log message
    VMkernel remote syslog messages might not match with the severity level log message.
    This issue is resolved in this release by tagging the following VMkernel syslog messages with appropriate severity level messages:
    system alerts ->'alert'
    warnings ->'warning'
    others ->'informational'
    Note: The vmkwarning.log file will contain syslog messages of severity level 'warning' and 'alert'.
  • Rx Ring #2 running out of memory might cause packet drops on the receiver side
    A Linux virtual machine with Large Receive Offload (LRO) enabled on a VMXNET3 device might experience packet drops on the receiver side when Rx Ring #2 runs out of memory. This occurs when the virtual machine handles packets generated by LRO. The Rx Ring #2 size is configurable from the Linux guest operating system.
    This issue is resolved in this release.
  • Cloning a CBT enabled VM template from ESXi hosts might fail
    Attempt to clone a CBT enabled VM template simultaneously from two different ESXi 5.1 hosts might fail. An error message similar to the following is displayed:
    Failed to open VM_template.vmdk': Could not open/create change tracking file (2108).
    This issue is resolved in this release.
  • ESXi host might report unsupported ioctl error
    An ESXi host might report an error message when certain vSCSI filters query virtual disks. If the underlying device does not support the unmap ioctl, the warning messages that appear in the vmkernel.log file are not valid.
    An error similar to the following might be reported:
    WARNING: VSCSIFilter: 1452: Failed to issue ioctl to get unmap readback type: Not supported.
    For more information, see KB 2058568.
    This issue is resolved in this release.
  • VMkernel interface binding might fail on vSphere 5.1
    vMotion VMkernel interface binding might fail on vSphere 5.1 after you enable FT VMkernel interface binding and reboot the system. A warning message similar to the following is displayed:
    cpu10:4656)WARNING: MigrateNet: 601: VMotion server accept socket failed to bind to vmknic 'vmk1', as specified in /Migrate/Vmknic
    cpu8:4699)WARNING: VMKStateLogger: 9260: Failed to use SO_BINDTOVMK of sock 0x410024035eb0 vmk3: Operation not supported

    This issue is resolved in this release.
  • ESXi 5.x hosts with access to LUNs used by MSCS nodes as RDMs might take a long time to boot
    When you reboot ESXi 5.x hosts with access to Raw Device Mapped (RDM) LUNs used by MSCS nodes, a host deployed with the Auto Deploy option might take a long time to boot. This happens even when the perennially reserved flag is set for the LUNs through a host profile. The boot time depends on the number of RDMs attached to the ESXi host. For more information on the perennially reserved flag, see KB 1016106.
    This issue is resolved in this release.
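    The perennially reserved flag referenced above can also be set manually per LUN; a minimal sketch, with naa.xxxx as a placeholder device identifier (see KB 1016106 for details):

```shell
# Mark an RDM LUN used by MSCS nodes as perennially reserved so the host
# does not wait on its SCSI reservation during boot-time device scanning.
# naa.xxxx is a placeholder; substitute the actual device identifier.
esxcli storage core device setconfig -d naa.xxxx --perennially-reserved=true

# Confirm that "Is Perennially Reserved" now reports true.
esxcli storage core device list -d naa.xxxx
```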
  • ESXi 5.x host might fail with a purple diagnostic screen on AMD Opteron 6300 Series processors
    An ESXi 5.x host that uses AMD Opteron 6300 Series (63xx) processors affected by AMD erratum number 815 might become unresponsive with a purple screen. The purple screen displays the text IDT_HandleInterrupt or IDT_VMMForwardIntr followed by an unexpected function, as described in KB 2061211.
    This issue is resolved in this release.
  • Virtual machines might fail to power on when you add PCI devices of BAR size less than 4 KB as a passthrough device
    Virtual machines might fail to power on when you add PCI devices of Base Address Register (BAR) size less than 4 KB as passthrough devices. A message similar to the following is written to the vmware.log file:
    PCIPassthru: Device 029:04.0 barIndex 0 type 2 realaddr 0x97a03000 size 128 flags 0
    PCIPassthru: Device 029:04.0 barIndex 1 type 2 realaddr 0x97a01000 size 1024 flags 0
    PCIPassthru: Device 029:04.0 barIndex 2 type 2 realaddr 0x97a02000 size 128 flags 0
    PCIPassthru: Device 029:04.0 barIndex 3 type 2 realaddr 0x97a00000 size 1024 flags 0
    PCIPassthru: 029:04.0 : barSize: 128 is not pgsize multiple
    PCIPassthru: 029:04.0 : barSize: 1024 is not pgsize multiple
    PCIPassthru: 029:04.0 : barSize: 128 is not pgsize multiple
    PCIPassthru: 029:04.0 : barSize: 1024 is not pgsize multiple
    This issue is resolved in this release.

Networking

  • The values of virtualDisk.throughput.usage in the vCenter performance chart and in esxtop are inconsistent
    In the vCenter performance chart, the virtualDisk.throughput.usage value is reported in bytes, but the same value is reported in kilobytes by the esxtop tool.
    This issue is resolved in this release.
  • Attempts to power on a virtual machine on a host that already has a large number of dvPorts in use might fail with an error message
    When you power on a virtual machine with a dvPort on a host that already has a large number of dvPorts in use, an error similar to the following is written to hostd.log:
    Unable to Get DVPort list ; Status(bad0014)= Out of memory
    A warning is also written to the vmkernel.log similar to the following:
    WARNING: Heap: 2757: Heap dvsLargeHeap (132993984/134217728): Maximum allowed growth (1224704) too small for size (9269248)
    This issue is resolved in this release by adding a configuration option. If you encounter this issue, change the value of the newly added Net.DVSLargeHeapInitSize option to 50M from the vCenter Server, and then restart the ESXi host.
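    The Net.DVSLargeHeapInitSize option can also be set from the ESXi Shell; a hedged sketch (the unit of the integer value is an assumption — confirm it from the option's description before applying):

```shell
# Inspect the newly added advanced option and its description.
esxcli system settings advanced list -o /Net/DVSLargeHeapInitSize

# Set the initial dvsLargeHeap size to the recommended 50M.
# The unit of the integer value (MB vs. bytes) is an assumption here;
# verify it against the description shown by the list command above.
esxcli system settings advanced set -o /Net/DVSLargeHeapInitSize -i 50

# Reboot the host for the new heap size to take effect.
```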
  • PXE booting virtual machines that use the VMXNET3 network adapter through Microsoft Windows Deployment Services (WDS) might fail
    Attempts to PXE boot virtual machines that use the VMXNET3 network adapter by using Microsoft Windows Deployment Services (WDS) might fail with messages similar to the following:
    Windows failed to start. A recent hardware or software change might be the cause. To fix the problem: 1. Insert your Windows installation disc and restart your computer.
    2. Choose your language setting, and then click "Next.".
    3. Click "Repair your computer.".
    If you do not have the disc, contact your system administrator or computer manufacturer for assistance. Status: 0xc0000001 Info: The boot selection failed because a required device is inaccessible.

    This issue is resolved in this release.
  • Bursts of data packets sent by applications might be dropped due to limited queue size on a vDS or on a standard vSwitch
    On a vNetwork Distributed Switch (vDS) or on a standard vSwitch with traffic shaping enabled, bursts of data packets sent by applications might be dropped due to the limited queue size.
    This issue is resolved in this release.
  • Host Profile compliance check might fail with did not find mapping for ip x.x.x.x in /etc/hosts file
    Attempts to apply a host profile on ESXi 5.1 might fail with a compliance failure even after the host profile is applied successfully.
    An error message similar to the following is displayed:
    Host is unavailable for checking compliance.
    This issue is resolved in this release.
  • Booting an ESXi 5.1 host by using the Auto Deploy option might fail
    Attempts to boot an ESXi 5.1 host by using the Auto Deploy option might fail. The failure occurs when the host attempts to download the ESXi image: iPXE enters a loop and the download of the ESXi image stops.
    This issue is resolved in this release.
  • Ethtool utility might report incorrect cable type for Emulex 10Gb Ethernet (10GbE) 554FLR-SFP adapter
    The ethtool utility might report an incorrect cable connection type for the Emulex 10Gb Ethernet (10GbE) 554FLR-SFP adapter. This is because ethtool might not support the Direct Attached Copper (DAC) port type.
    This issue is resolved in this release.
  • Incorrect result might be reported while performing a vSphere Distributed Switch (VDS) health check
    Incorrect results might be reported when you use the vSphere Web Client VDS health check to monitor the health status of VLAN, MTU, and teaming policies.
    This issue is resolved in this release.

Security

  • Update to glibc packages addresses multiple security issues
    The ESXi glibc-2.5 package is updated to resolve multiple security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2013-0242, CVE-2013-1914, and CVE-2013-4332 to these issues.
  • Update to libxml2 library addresses multiple security issues
    The ESXi userworld libxml2 library is updated to resolve multiple security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2013-2877 and CVE-2014-0191 to these issues.
  • Update to libcurl library addresses multiple security issues
    The ESXi userworld libcurl library has been updated to resolve multiple security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2014-0015 and CVE-2014-0138 to these issues.
  • vmx.log.rotateSize parameter is enabled in ESXi 5.1
    The vmx.log.rotateSize parameter was disabled in previous ESXi releases; this parameter is now enabled by VMX in ESXi 5.1. The vmx.log.rotateSize parameter controls the size of the vmware.log file.
  • Update to Transparent Page Sharing (TPS) management capabilities and new default behavior
    In ESXi 5.1 Update 3, the pshare salting for the Transparent Page Sharing (TPS) management capabilities introduced in the previous patch release is enabled by default. This means that TPS applies only within individual VMs, and inter-VM TPS is disabled unless an administrator chooses to re-enable it.
    For more information, see KB 2097593.
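    Per KB 2097593, the new default is governed by the Mem.ShareForceSalting advanced setting; a sketch of how an administrator might inspect or revert the behavior (re-enabling inter-VM TPS is a deliberate trade-off against the security concern that motivated the change):

```shell
# Show the current TPS salting mode; in ESXi 5.1 Update 3 the default
# enables per-VM salting, which restricts page sharing to within a VM.
esxcli system settings advanced list -o /Mem/ShareForceSalting

# Revert to the pre-Update 3 behavior (inter-VM page sharing enabled).
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0
```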
  • Update to the Likewise Kerberos stack
    The Likewise 5.3 stack, which includes Kerberos v5-1.6.3, has been updated.

Server Configuration

  • Host profiles compliance check might fail when rebooting hosts with NMP device configuration inconsistency listed in VMware ESXi 5.1.x
    A reboot of an ESXi host lists the Native Multipathing Plugin (NMP) device information in an arbitrary order. Because the host profile compliance checker requires the device order to be sorted, the compliance check on hosts with such a configuration might fail. The following compliance error message is displayed:
    Specification state absent from host: SATP configuration for device naa.xxxx needs to be updated
    For more information, see KB 2032822.
    This issue is resolved in this release.
  • Attempts to install ESXi on iSCSI remote LUN might fail
    Attempts to install ESXi on an iSCSI remote LUN might fail with the following error:
    Expecting 2 bootbanks, found 0
    This issue is resolved in this release.
  • When ESXi hosts are booted from SAN disks, applying a Host Profile might fail with an error in ESXi 5.1.x
    Applying a storage Host Profile in ESXi 5.1 might fail when you boot an ESXi host from SAN disks. An error message similar to the following is displayed:
    Host state doesn't match specification: device 'naa.xxxxxxx' parameters needs to be reset
    Host state doesn't match specification: device 'naa.xxxxxx' Path Selection Policy needs to be set to default for claiming SATP
    Specification state absent from host: device naa.XXX parameters needs to be set to Is Perennially Reserved = "false"
    Specification state absent from host: device naa.XXX parameters needs to be set to State = "on" Queue Full Sample Size = "0"
    Queue Full Threshold = "0"...
    Note: The boot LUNs on the two hosts are expected to be identical in attributes like vendor name or model number and claiming SATP for the fix to work satisfactorily.
    This issue is resolved in this release.
  • Host profile might not apply SNMP settings to the target host
    If the value specified for syslocation contains a space on the source host, then the host profile created from the source host might not apply SNMP settings to the target host.
    This issue is resolved in this release.
  • Inconsistent Core Utilization value is reported in the real-time CPU performance chart
    On ESXi hosts with Hyper-Threading enabled, the Core Utilization value reported in the CPU performance chart, when viewed through the vSphere Client or the vCenter Server, is twice the core utilization value reported by esxtop.
    This issue is resolved in this release.
  • sfcb service might fail to open the ESXi firewall for CIM indication delivery if more than one destination listens for the indication on different ports
    The sfcb server can create only one dynamic firewall rule for the port on which a destination listens for the CIM indication. The sfcb server fails to open the ESXi firewall for CIM indication delivery if more than one destination listens for the indication on different ports. As a result, only one destination can receive the CIM indication.
    This issue is resolved in this release by creating one firewall rule for each port.
  • ESXCLI commands might fail on Cisco UCS blade servers due to heavy storage load
    ESXCLI commands might fail on Cisco UCS blade servers under heavy storage load. Error messages similar to the following might be written to the hostd.log file:
    2013-12-13T16:24:57.402Z [3C5C9B90 verbose 'ThreadPool'] usage : total=20 max=62 workrun=18 iorun=2 workQ=78 ioQ=0 maxrun=31 maxQ=79 cur=I
    2013-12-13T16:24:57.403Z [3C5C9B90 verbose 'ThreadPool'] usage : total=20 max=62 workrun=18 iorun=2 workQ=78 ioQ=0 maxrun=31 maxQ=79 cur=I
    2013-12-13T16:24:57.404Z [3BEBEB90 verbose 'ThreadPool'] usage : total=21 max=62 workrun=18 iorun=3 workQ=78 ioQ=0 maxrun=31 maxQ=79 cur=I
    2013-12-13T16:24:58.003Z [3BEBEB90 verbose 'ThreadPool'] usage : total=21 max=62 workrun=18 iorun=3 workQ=78 ioQ=0 maxrun=31 maxQ=79 cur=I
    2013-12-13T16:24:58.282Z [3C9D4B90 verbose 'ThreadPool'] usage : total=22 max=62 workrun=18 iorun=4 workQ=78 ioQ=0 maxrun=31 maxQ=79 cur=I

    This issue is resolved in this release.
  • Virtual machines do not display the physical host serial number
    Virtual machines do not display the serial numbers of ESXi hosts.
    This issue is resolved in this release.
  • DCUI might become unresponsive while a vim-cmd command is running
    The Direct Console User Interface (DCUI) might become unresponsive if the vim-cmd command that you are running takes a long time to complete.
    This issue is resolved in this release by implementing a timeout mechanism for vim-cmd commands that take a long time to complete.
  • Status change of LSI MegaRaid disk might not be indicated or might be delayed
    When you use LSI SMI-S provider with MegaRaid SAS device driver on an ESXi 5.x host, the status change of LSI MegaRaid disk might not be indicated or might be delayed when you run the enum_instances LSIESG_PhysicalDrive lsi/lsimr13 command.
    The following example indicates that the value of PowerState does not change or changes after a delay when the Power Saving mode of LSI MegaRaid disk is modified:
    LSILSIESG_PhysicalDrive.Tag="500605B004F93CF0_252_36",CreationClassName="LSIESG_PhysicalDrive"
    CreationClassName = LSIESG_PhysicalDrive
    Tag = 500605B004F93CF0_252_36
    UserDataBlockSize = 512
    PiType = 0
    PiEligible = 0
    PiFomatted = 0
    PowerState = 0 ---> No change in status
    Vendor = (NULL)
    FRUNumber = (NULL)
    DiskDrive_DeviceID = 252_36
    This issue is resolved in this release.
  • Performing a compliance check on an ESXi host might result in error message
    When you perform a Host Profile compliance check on an ESXi host, an error message similar to the following might be displayed in the vSphere Client:
    Found extra CIM-XML Indication Subscription on local system for query u'select * from CIM_AlertIndication' sent to destination u'https://IP:port'
    This issue is resolved in this release.
  • ESXi hosts might be disconnected from vCenter Server after Veeam backup is performed
    After you perform Veeam backup, the ESXi hosts might be disconnected from the vCenter Server.
    This issue occurs when Veeam attempts to create a snapshot of the virtual machine.
    Error messages similar to the following are written to the hostd.log file:
    --> Crash Report build=1312873
    --> Signal 11 received, si_code -128, si_errno 0
    --> Bad access at 735F6572

    This issue is resolved in this release.
  • Permissions for an AD user or group might not persist after rebooting the ESXi host
    When you set permissions for an Active Directory (AD) user or group on an ESXi host with Host Profile, the permissions might not persist after you reboot the ESXi host with Auto Deploy.
    This issue is resolved in this release.
  • Hardware Status tab in the vCenter Server might report memory warnings and memory alert messages
    If an Assert or Deassert entry is logged into the IPMI System Event Log (SEL) for the Memory Presence Detected line as part of Memory DIMM detection, the Hardware Status tab in the vCenter Server might report it as memory warnings and memory alert messages.
    This issue is resolved in this release.
  • After applying an ESXi 5.1 patch on an IBM server, the ESXi host might not load the IBM vusb NIC
    After applying an ESXi 5.1 patch on an IBM server, the ESXi host might not load the IBM vusb NIC because the host does not recognize the vusb device. When you run the esxcfg-nics -l command, the following output is displayed:
    vusb0 Pseudo cdc_ether Up 10Mbps Half 6e:ae:8b:30:1d:53 1500 Unknown Unknown
    This issue is resolved in this release.
  • SNMP management systems might report incorrect ESXi volume size for large file systems
    When you monitor ESXi through SNMP or management software that relies on SNMP, the SNMP management system might report an incorrect ESXi volume size when it retrieves the volume size of a large file system.
    This issue is resolved in this release. A new switch that supports large file systems has been introduced in this release.

Storage

  • NFS volumes are unmounted from the ESXi host during host reboot
    When you reboot an ESXi host while the primary DNS server is unavailable and the secondary server is available, the NFS volumes might not be restored due to a delay in resolving the NFS server host names (FQDNs).
    This issue is resolved in this release.
  • New claim rule option added for IBM
    A new claim rule option reset_on_attempted_reserve has been added to ESXi 5.1 for IBM storage Array Model 2145. For more information, see KB 2008333.
    This issue is resolved in this release.
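    As a sketch, such an option is typically passed through an NMP SATP claim rule similar to the following; the SATP name shown is an assumption, so confirm the correct rule for your array against KB 2008333:

```shell
# Add an SATP claim rule for the IBM 2145 array that applies the new
# reset_on_attempted_reserve option (SATP name is an assumption).
esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA --vendor=IBM \
    --model=2145 --option=reset_on_attempted_reserve \
    --description="IBM 2145 with reset_on_attempted_reserve"
```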
  • Virtual machines that utilize a PVSCSI adapter might intermittently stop responding
    On an ESXi 5.1 host, virtual machines with disks connected through PVSCSI controllers might stop responding intermittently.
    This issue is observed on ESXi hosts with a large number of PCPUs and a heavy I/O load.
    This issue is resolved in this release.
  • On an ESXi host, SATA-based SSD devices behind SAS controllers might be displayed incorrectly as non-local
    SATA-based SSD devices behind SAS controllers might be marked incorrectly as non-local, which might affect the virtual flash feature because it considers only local SSD devices for the vFlash datastore.
    This issue is resolved in this release. All SATA-based SSD devices behind SAS controllers appear as local devices.
  • Error messages related to SCSI Mode sense command failure (0x1a) might be observed in the VMkernel log when ESXi 5.1 host is connected to a SES device
    On an ESXi 5.1 host connected to a SCSI Enclosure Services (SES) device, error messages similar to the following might be logged in the vmkernel.log file every five minutes:
    2014-03-04T19:45:29.289Z cpu12:16296)NMP: nmp_ThrottleLogForDevice:2319: Cmd 0x1a (0x412440c73140, 0) to dev "mpx.vmhba0:C0:T13:L0" on path "vmhba0:C0:T13:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0. Act:NONE
    2014-03-04T19:50:29.290Z cpu2:16286)NMP: nmp_ThrottleLogForDevice:2319: Cmd 0x1a (0x412440c94940, 0) to dev "mpx.vmhba0:C0:T13:L0" on path "vmhba0:C0:T13:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0. Act:NONE
    2014-03-04T19:55:29.288Z cpu22:16306)NMP: nmp_ThrottleLogForDevice:2319: Cmd 0x1a (0x412480c51340, 0) to dev
    This issue is resolved in this release.
  • Virtual machine might fail while you perform SCSI I/O operation under memory overload
    On an ESXi host, a virtual machine might fail while you perform SCSI I/O operation. The vmware.log file might contain messages similar to the following:
    LSI: ProcessSDRMessage: Unhandled Doorbell Message 0x2
    ASSERT bora/devices/lsilogic/lsilogic.c:4856 bugNr=56949
    This issue might occur when the virtual machine experiences memory overload.
    This issue is resolved in this release.
  • ESXi 5.x host fails when you use an LSI MegaRAID SAS driver version earlier than 6.506.51.00.1vmw
    ESXi 5.x host fails with a purple diagnostic screen when you use an LSI MegaRAID SAS driver version earlier than 6.506.51.00.1vmw. You see a backtrace similar to the following:
    @BlueScreen: #PF Exception 14 in world 4795:helper31-5 IP 0x41800b4e2eef addr 0xce8
    Code start: 0x41800ae00000 VMK uptime: 156:04:02:21.485
    0x41224aec7eb0:[0x41800b4e2eef]megasas_reset_fusion@#+0x1e stack: 0x0
    0x41224aec7f60:[0x41800b2ea697]vmklnx_workqueue_callout@com.vmware.driverAPI#9.2+0x11a stack: 0x0
    0x41224aec7ff0:[0x41800ae3e129]helpFunc@vmkernel#nover+0x568 stack: 0x0
    0x41224aec7ff8:[0x0] stack: 0x0

    This issue is resolved in this release.
  • Virtual machines might experience slow I/O response
    On an ESXi host where the default I/O scheduler is enabled, if one or more virtual machines utilize the maximum I/O bandwidth of the device for a long time, an IOPS imbalance occurs due to a race condition identified in the ESXi default I/O scheduler.
    This issue is resolved in this release by ensuring uniformity in IOPS across VMs on an ESXi host.
  • Warning message appears in the vmkernel.log file when a vFAT partition is created
    On an ESXi 5.1 Update 2 host, the following warning message is logged in the /var/log/vmkernel.log file when a vFAT partition is created:
    WARNING: VFAT: 4346: Failed to flush file times: Stale file handle
    This issue is resolved in this release.
  • False PE change message might be displayed in the VMkernel log file when you rescan a VMFS datastore with multiple extents
    When you rescan a VMFS datastore with multiple extents, the following log message might be written in the VMkernel log even if no storage-connectivity issues occur:
    Number of PEs for volume changed from 3 to 1. A VMFS volume rescan may be needed to use this volume.
    This issue is resolved in this release.
  • Incorrect virtual disk usage might be reported in Datastore browser view for Eagerzeroedthick Virtual Disks
    Incorrect virtual disk usage might be reported in Datastore browser view for eagerzeroedthick virtual disks with vSphere Client and vSphere Web Client.
    This issue is resolved in this release.
  • Output of esxtop performance data might be displayed as zero
    When the output of esxtop performance data is redirected to a CSV formatted file, the esxtop.csv values collected in batch mode might change to zero. The esxtop.csv file might display I/O values similar to the following:
    "09/04/2013
    22:00:00","1251.43","4.89","7.12","1839.62","7.19","4.99","1273.05","4.97","7.08""09/04/2013
    22:00:10","1283.92","5.02","7.06","1875.14","7.32","4.89","1290.37","5.04","7.07""09/04/2013
    22:00:20","1286.49","5.03","7.03","1914.86","7.48","4.87","1320.55","5.16","6.90""09/04/2013
    22:00:31","1222.56","4.78","7.44","1775.23","6.93","5.21","1253.87","4.90","7.28""09/04/2013
    22:00:41","1269.87","4.96","7.15","1847.40","7.22","4.97","1267.62","4.95","7.13""09/04/2013
    22:00:51","1291.36","5.04","7.05","1857.97","7.26","4.96","1289.40","5.04","7.08""09/04/2013
    22:01:01","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00""09/04/2013
    22:01:11","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00""09/04/2013
    22:01:22","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00""09/04/2013
    22:01:32","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00""09/04/2013
    22:01:42","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00"
    This issue is resolved in this release.
  • During an HA failover or a host crash, the .vswp files of powered ON VMs on that host might be left behind on the storage
    During a High Availability failover or host crash, the .vswp files of powered ON virtual machines on that host might be left behind on the storage. When many such failovers or crashes occur, the storage capacity might become full.
    This issue is resolved in this release.
  • In the software iSCSI protocol, race condition in the send and receive path might cause data corruption
    During continuous read and write I/O operations from a virtual machine to an RDM LUN or a virtual disk (VMDK), the read data might sometimes get corrupted. The corruption is due to a race condition in the send and receive path logic.
    This issue is resolved in this release.

Supported Hardware

  • ESXi host might fail with a purple diagnostic screen due to buffer overflow
    On an ESXi 5.1 host, if more than 512 peripherals are connected, the ESXi host might fail with a purple diagnostic screen due to buffer overflow and display an error message similar to the following:
    ASSERT bora/vmkernal/core/vmkapi_device.c:1840
    This issue is resolved in this release. The Device Event Log buffer size is increased to 1024.
  • ESXi host might fail to detect PCI devices located directly under the PCI root bridge
    If PCI devices with memory-mapped I/O greater than 4 GB have their Base Address Registers marked as non-prefetchable, an ESXi 5.1 host might not detect such PCI devices even though they are located directly under the PCI root bridge.
    This issue is resolved in this release.
  • Microsoft Windows 2008 R2 and Solaris 10 64-bit virtual machines might display a blue diagnostic screen or a kernel panic message
    A blue diagnostic screen or a kernel panic message is displayed when Intel Extended Page Tables (EPT) is enabled on virtual machines running Microsoft Windows 2008 R2 or Solaris 10 64-bit.
    For more information, see KB 2073791.
    This issue is resolved in this release.

Upgrade and Installation

  • Auto Deploy service cannot re-register with vCenter Server after the vCenter Server SSL certificates have changed
    The vSphere Auto Deploy service fails to start after you change the vCenter Server SSL Certificates. The autodeploy.log file contains an entry similar to the following:
    Exception: Server has wrong SHA1 thumbprint
    This issue occurs if you change the vCenter Server SSL certificates after you install Auto Deploy and register it with vCenter Server. Auto Deploy fails to start because the SSL certificate thumbprint stored in the Auto Deploy database no longer matches that of the new vCenter Server SSL certificate. Also, the autodeploy-register command does not work with the new vCenter Server certificate thumbprint.
    This issue is resolved in this release.
  • Attempts to boot ESXi host using stateless cache image might fail with an error message
    You might be unable to boot an ESXi host by using the stateless cache image when Auto Deploy fails to boot the host.
    An error message similar to the following is displayed when the host attempts to boot using the cached image:
    file not found. Fatal error : 15 (Not found)
    This issue occurs when you upgrade Auto Deploy from ESXi 5.0 to ESXi 5.x and you use the same image in the new Auto Deploy environment.
    This issue is resolved in this release.
  • Performing a scripted installation of ESXi 5.1 or later might install ESXi on an SSD instead of the local drive
    Performing a scripted installation of ESXi 5.1 with the install --firstdisk=local command might install ESXi on an SSD instead of the selected local drive. To avoid installing ESXi on an SSD, the --ignoressd switch is added to the install --firstdisk=local command.
    Note: When the --ignoressd option is used with autopartition=TRUE, the SSD is left unformatted.
    This issue is resolved in this release.
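    For reference, the corresponding kickstart fragment looks similar to the following (a minimal sketch; the --overwritevmfs flag is illustrative and depends on your environment):

```
# ks.cfg fragment: install to the first local non-SSD disk.
# --ignoressd keeps SSDs out of consideration (and unformatted when
# combined with autopartition=TRUE).
install --firstdisk=local --ignoressd --overwritevmfs
```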
  • nsswitch.conf might be empty when you upgrade hosts from ESX to ESXi
    When you upgrade a host from ESX to ESXi, the nsswitch.conf file might not be migrated properly. As a result, the nsswitch.conf file might be empty.
    This issue is resolved in this release.
  • Error messages are observed on the boot screen when an ESXi 5.1 host boots from Auto Deploy stateless caching
    An error message with tracebacks is observed on the boot screen when an ESXi 5.1 host boots from Auto Deploy stateless caching. The error is caused by an unexpectedly short message (fewer than four characters) in the syslog handling of the network.py script.
    An error message similar to the following is reported:
    IndexError: string index out of range
    This issue is resolved in this release.

vCenter Server, vSphere Client, and vSphere Web Client

  • The CPU usage displayed in the vCenter performance chart and by the esxtop command are inconsistent
    High CPU usage might be displayed in the vCenter performance chart, whereas such high values are not displayed when you run the esxtop command with the -c option at the same time. This issue occurs only if Hyper-Threading is enabled on the ESXi host.
    This issue is resolved in this release.
  • The hostd process might fail during a virtual machine power on, power off, or unregister operation, and as a result the ESXi host might get disconnected from the vCenter Server
    After the hostd process fails, the ESXi host might get disconnected from the vCenter Server and might not be able to reconnect. This issue occurs when the VmStateListener::VmPowerStateListener function calls the VirtualMachineImpl::GetSummary function after the virtual machine is unregistered. Because this causes a Managed Object not found exception, hostd fails and the ESXi host gets disconnected from the vCenter Server.
    This issue is resolved in this release.
  • Average CPU usage values might be greater than the frequency of the processors multiplied by the number of processors
    The average CPU usage values displayed by PowerCLI might be greater than the value obtained by multiplying the frequency of the processors by the number of processors.
    This issue is resolved in this release by setting the maximum limit of the average CPU usage values correctly.

Virtual Machine Management

  • Virtual machine might become unresponsive after you hot-add CPUs to the virtual machine
    On an ESXi 5.1 host, a Windows Server 2008 R2 virtual machine might become unresponsive after you hot-add CPUs to the virtual machine.
    This issue is resolved in this release.
  • ESXi host might fail with a purple screen when you run custom scripts that use the AdvStats parameter
    An ESXi host might fail with a purple screen when you run custom scripts that use the AdvStats parameter to check disk usage. An error message similar to the following might be written to the vmkernel.log file:
    VSCSI: 231: Creating advStats for handle 8192, vscsi0:0
    The host reports a backtrace similar to:
    Histogram_XXX
    VSCSIPostIOCompletion
    AsyncPopCallbackFrameInt
    This issue is resolved in this release.
  • Virtual machine might fail while creating a quiesced snapshot
    A virtual machine might fail when vSphere Replication or another service initiates a quiesced snapshot.
    This issue is resolved in this release.
  • The guest operating system might fail to respond when you repeatedly take snapshots
    The guest operating system might fail to respond when it generates data faster than the consolidation rate. For example, asynchronous consolidation starts with a 5-minute run, then goes to 10 minutes, 20 minutes, 30 minutes, and so on. After 9 iterations, it reaches 60 minutes per cycle. During these attempts, consolidation is performed without stunning the virtual machine. After the maximum number of iterations, a synchronous consolidation is forced, in which the virtual machine is stunned while the consolidation is performed.
    You see entries similar to the following in vmware.log when the guest operating system fails to respond:
    SnapshotVMXNeedConsolidateIteration: Took maximum permissible helper snapshots, performing synchronous consolidate of current disk.
    This issue is resolved by introducing a new configuration option, snapshot.asyncConsolidate.forceSync = "FALSE", which disables forced synchronous consolidation and allows the virtual machine to continue running even after the maximum number of asynchronous consolidation iterations is exceeded. With this option set, the consolidation fails instead of stunning the virtual machine if a synchronous consolidation is required.
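    To apply the new option, add the following line to the virtual machine's .vmx file (or set it as an advanced configuration parameter) while the virtual machine is powered off:

```
# .vmx fragment: disable forced synchronous snapshot consolidation.
# A consolidation that would require stunning the VM will then fail
# instead of pausing the guest.
snapshot.asyncConsolidate.forceSync = "FALSE"
```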
  • ESXi host might be disconnected from the vCenter Server when hostd fails
    When you use the vim.VirtualMachine.reconfigure interface on a vCenter Server to register or change virtual machine configuration, the hostd might fail with a message similar to the following:
    2014-01-19T16:29:49.995Z [26241B90 info 'TaskManager' opID=ae6bb01c-d1] Task Created : haTask-674- vim.VirtualMachine.reconfigure-222691190 Section for VMware ESX, pid=487392, version=5.0.0, build=build-821926, option=Release
    2014-01-19T16:33:02.609Z [23E40B90 info 'ha-eventmgr'] Event 1 : /sbin/hostd crashed (1 time(s) so far) and a core file might have been created at /var/core/hostd-worker-zdump.000.

    This issue is resolved in this release.

vMotion and Storage vMotion

  • Storage vMotion of a virtual machine with more than 10 snapshots might fail with an error message
    Even if the destination datastore has sufficient storage space, Storage vMotion of a virtual machine with more than 10 snapshots might fail with the following error message:
    Insufficient space on the datastore <datastore name>.
    This issue is resolved in this release.
  • Attempts to perform live Storage vMotion of virtual machines with RDM disks might fail
    Storage vMotion of virtual machines with RDM disks might fail and cause virtual machines to be powered off. Attempts to power on the virtual machine might fail with the following error message:
    Failed to lock the file
    This issue is resolved in this release.
  • Performing vMotion operation might cause VMware Tools to auto-upgrade and virtual machines to reboot
    VMware Tools might auto-upgrade and virtual machines might reboot if you enable upgrading of VMware Tools on power cycle and then perform vMotion from an ESXi host with a no-tools image profile to another ESXi host with the newer VMware Tools ISO image.
    This issue is resolved in this release.
  • Migration of a powered off virtual machine with RDM disks might fail with an error message
    Attempts to migrate a powered off virtual machine with RDM disks might fail with the following error message if you convert the RDM disk to a thin provisioned disk:
    Incompatible device backing specified for device '0'.
    This issue is resolved in this release.
  • During vMotion, the hostd process might fail and the virtual machine might not power on
    Virtual machine power on operations that occur as a result of vMotion or Storage vMotion might fail if both the ctkEnabled and writeThrough parameters are enabled.
    This issue is resolved in this release.
  • Page fault might occur while using XvMotion feature on ESXi hosts
    A page fault might occur when you enable all optimizations while using the XvMotion feature on ESXi hosts with a datastore of block size 1MB. A purple screen with a message similar to the following is displayed:
    PSOD @BlueScreen: #PF Exception 14 in world 1000046563:helper14-2 IP 0x4180079313ab addr 0x4130045c3600
    This issue is resolved in this release.
  • Cancelling Storage vMotion might cause virtual machine to reboot abruptly
    When you cancel a Storage vMotion task, the virtual machine might become unresponsive and reboot unexpectedly.
    This issue is resolved in this release.

VMware HA and Fault Tolerance

  • Powering on a virtual machine fails with an invalid configuration error
    Powering on a virtual machine after a complete power outage for all hosts in a cluster might fail and result in an error message similar to the following:
    Invalid Configuration for Device 0
    This issue is resolved in this release.

VMware Tools

  • Windows guest operating system installed with VMware Tools and with multiple RDP connections might display warning messages in event viewer
    After you install VMware Tools, if you attempt to use RDP to connect to a Windows virtual machine, some of the plug-ins might write a warning message to the Windows event log. The warning message indicates the failure to send remote procedure calls to the host.
    This issue is resolved in this release.
  • vmtoolsd might fail on a Linux virtual machine during shutdown
    On a Linux virtual machine, the VMware Tools service vmtoolsd might fail when you shut down the guest operating system.
    This issue is resolved in this release.
  • Installing VMware Tools on a Spanish locale Windows guest operating system might cause Windows Event Viewer to display a warning message
    After you install VMware Tools, the Windows Event Viewer displays a warning similar to the following:
    Unable to read a line from 'C:\Program Files\VMware\VMware Tools\messages\es\hgfsUsability.vmsg': Invalid byte sequence in conversion input.
    This issue is particularly noticed when you install VMware Tools on a Spanish locale operating system.
    This issue is resolved in this release.
  • Unable to open telnet on Windows 8 or Windows Server 2012 guest operating system after installing VMware Tools
    After installing VMware Tools on Windows 8 or Windows Server 2012 guest operating system, attempts to open telnet using the start telnet://xx.xx.xx.xx command fail with the following error message:
    Make sure the virtual machine's configuration allows the guest to open host applications
    This issue is resolved in this release.
  • Virtual machines with Linux guest operating system might report a kernel panic error when you install VMware Tools
    When you install VMware Tools on a Linux guest operating system with multiple kernels, the virtual machine might report a kernel panic error and stop responding at boot time.
    This issue is observed on Linux virtual machines running kernel version 2.6.13 or later and occurs when you run vmware-config-tools.pl to reconfigure VMware Tools for another kernel.
    This issue is resolved in this release.
  • Errors while updating VMware Tools on RHEL 6.2 using RHEL OSP RPM
    An error message similar to the following might be displayed while updating an earlier version of VMware Tools to version 9.0.5 on a RHEL 6.2 virtual machine using RHEL OSP RPM from http://packages.vmware.com/tools:
    Error: Package: kmod-vmware-tools-vsock-9.3.3.0-2.6.32.71.el6.x86_64.3.x86_64 (RHEL6-isv)
    Requires: vmware-tools-vsock-common = 9.0.1
    Installed: vmware-tools-vsock-common-9.0.5-1.el6.x86_64 (@/vmware-tools-vsock-common-9.0.5-1.el6.x86_64)
    vmware-tools-vsock-common = 9.0.5-1.el6
    Available: vmware-tools-vsock-common-8.6.10-1.el6.x86_64 (RHEL6-isv)
    vmware-tools-vsock-common = 8.6.10-1.el6
    Available: vmware-tools-vsock-common-9.0.1-3.x86_64 (RHEL6-isv)
    vmware-tools-vsock-common = 9.0.1-3
    This issue is resolved in this release.
  • Some of the drivers might not work as expected on Solaris 11 virtual machine
    On an ESXi 5.1 host, some of the drivers installed on Solaris 11 guest operating system might be from Solaris 10. As a result, the drivers might not work as expected.
    This issue is resolved in this release.
  • Running the traceroute command on an ESXi 5.1 host might fail with a message stating that multiple interfaces were found
    When you run the traceroute command with the -i option on an ESXi 5.1 host, the command might fail with a warning message.
    A warning message similar to the following is displayed:
    Warning: Multiple interfaces found (Inconsistent behavior through ESXi)
    This issue is resolved in this release.
  • Upgrading to VMware Tools 5.x might cause log spew in the VMX log file
    When two or more users log in to the graphical console either from the local or from a remote terminal on a Windows or Linux guest operating system, upgrading to VMware Tools 5.x might cause several log entries similar to the following to be logged to the VMX log file:
    Error in the RPC receive loop: RpcIn: Unable to send
    This issue is resolved in this release.
  • Attempts to upgrade VMware Tools on a Windows 2000 virtual machine might fail
    Attempts to upgrade VMware Tools on a Windows 2000 virtual machine might fail. An error message similar to the following is written to vmmsi.log:  
    Invoking remote custom action. DLL: C:\WINNT\Installer\MSI12.tmp, Entrypoint: VMRun
    VM_CacheMod. Return value 3.
    PROPERTY CHANGE: Deleting RESUME property. Its current value is '1'.
    INSTALL. Return value 3.
     
    This issue is resolved in this release.
  • Process Explorer displays incorrect handle count
    When the HGFS module transfers a large number of files, or very large files, between an ESXi host running the vSphere Client and the console of the guest operating system, Process Explorer displays an incorrect handle count due to a handle leak.
    This issue is resolved in this release.
  • Attempts to cancel or stop the VMware Tools installation after the tools are installed on a Linux, FreeBSD, or Solaris guest might fail
    After a successful installation of VMware Tools on Linux, FreeBSD, or Solaris virtual machines, the VMware Tools installation might still show as in progress. Attempts to stop the VMware Tools installation on the virtual machine might fail.
    This issue is resolved in this release.

Known Issues

Known issues not previously documented are marked with the * symbol. The known issues are grouped as follows.

Backup Issues

  • Changed Block Tracking is reset for virtual RDM disks during cold migration*
    During migration of a powered off virtual machine, Changed Block Tracking (CBT) is reset for virtual RDM disks.

    Note: This issue does not occur if the virtual machine is powered on.

Installation and Upgrade Issues

  • Inventory objects might not be visible after upgrading a vCenter Server Appliance configured with Postgres database
    When a vCenter Server Appliance configured with Postgres database is upgraded from 5.0 Update 2 to 5.1 Update 1, inventory objects such as datacenters, vDS and so on that existed before the upgrade might not be visible. This issue occurs when you use vSphere Web Client to connect to vCenter Server appliance.

    Workaround: Restart the Inventory service after upgrading vCenter Server Appliance.

  • For Auto Deploy Stateful installation, cannot use firstdisk argument of ESX on systems that have ESX/ESXi already installed on USB
    You configure the host profile for a host that you want to set up for Stateful Install with Auto Deploy. As part of configuration, you select USB as the disk, and you specify esx as the first argument. The host currently has ESX/ESXi installed on USB. Instead of installing ESXi on USB, Auto Deploy installs ESXi on the local disk.

    Workaround: None.

  • Auto Deploy PowerCLI cmdlets Copy-DeployRule and Set-DeployRule require object as input
    When you run the Copy-DeployRule or Set-DeployRule cmdlet and pass in an image profile or host profile name, an error results.

    Workaround: Pass in the image profile or host profile object.

  • Applying host profile that is set up to use Auto Deploy with stateless caching fails if ESX is installed on the selected disk
    You use host profiles to set up Auto Deploy with stateless caching enabled. In the host profile, you select a disk on which a version of ESX (not ESXi) is installed. When you apply the host profile, an error that includes the following text appears.
    Expecting 2 bootbanks, found 0

    Workaround: Remove the ESX software from the disk, or select a different disk to use for stateless caching.

  • vSphere Auto Deploy no longer works after a change to the IP address of the machine that hosts the Auto Deploy server
    You install Auto Deploy on a different machine than the vCenter Server, and change the IP address of the machine that hosts the Auto Deploy server. Auto Deploy commands no longer work after the change.

    Workaround: Restart the Auto Deploy server service.
    net start vmware-autodeploy-waiter
    If restarting the service does not resolve the issue, you might have to reregister the Auto Deploy server. Run the following command, specifying all options.
    autodeploy-register.exe -R -a vCenter-IP -p vCenter-Port -u user_name -w password -s setup-file-path

  • On HP DL980 G7, ESXi hosts do not boot through Auto Deploy when onboard NICs are used
    You cannot boot an HP DL980 G7 system using Auto Deploy if the system is using the onboard (LOM Netxen) NICs for PXE booting.

    Workaround: Install an add-on NIC approved by HP on the host, for example HP NC360T, and use that NIC for PXE booting.

  • A live update with esxcli fails with a VibDownloadError
    A user performs two updates in sequence, as follows.

    1. A live install update using the esxcli software profile update or esxcli vib update command.
    2. A reboot required update.

    The second transaction fails. One common failure is signature verification, which can be checked only after the VIB is downloaded.

    Workaround: Resolving the issue is a two-step process.

    1. Reboot the ESXi host to clean up its state.
    2. Repeat the live install.
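
    As a sketch of the two steps above (dry-run only: ESXCLI defaults to echo so the command is printed rather than executed, and the depot path and profile name are placeholders):

```shell
# Hedged sketch of the recovery steps. ESXCLI defaults to a dry-run echo; on a
# real ESXi host, set ESXCLI=esxcli and reboot the host before repeating the install.
ESXCLI=${ESXCLI:-echo esxcli}
# Step 1: reboot the ESXi host to clean up its state.
# reboot
# Step 2: repeat the live install; depot path and profile name are placeholders.
$ESXCLI software profile update -d /vmfs/volumes/datastore1/depot.zip -p ESXi-5.1.0-standard
```
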

  • ESXi scripted installation fails to find the kickstart (ks) file on a CD-ROM drive when the machine does not have any NICs connected
    When the kickstart file is on a CD-ROM drive in a system that does not have any NICs connected, the installer displays the error message: Can't find the kickstart file on cd-rom with path <path_to_ks_file>.

    Workaround: Reconnect the NICs to establish network connection, and retry the installation.

  • Scripted installation fails on the SWFCoE LUN
    When the ESXi installer invokes installation using the kickstart (ks) file, all the FCoE LUNs have not yet been scanned and populated by the time installation starts. This causes the scripted installation on any of the LUNs to fail. The failure occurs when the https, http, or ftp protocol is used to access the kickstart file.

    Workaround: In the %pre section of the kickstart file, include a sleep of two minutes:
    %pre --interpreter=busybox
    sleep 120

  • Potential problems if you upgrade vCenter Server but do not upgrade Auto Deploy server
    When you upgrade vCenter Server, vCenter Server replaces the 5.0 vSphere HA agent (vmware-fdm) with a new agent on each ESXi host. The replacement happens each time an ESXi host reboots. If vCenter Server is not available, the ESXi hosts cannot join a cluster.

    Workaround: If possible, upgrade the Auto Deploy server.
    If you cannot upgrade the Auto Deploy server, you can use Image Builder PowerCLI cmdlets included with vSphere PowerCLI to create an ESXi 5.0 image profile that includes the new vmware-fdm VIB. You can supply your hosts with that image profile.

    1. Add the ESXi 5.0 software depot and add the software depot that contains the new vmware-fdm VIB.
      Add-EsxSoftwareDepot C:\Path\VMware-Esxi-5.0.0-buildnumber-depot.zip
      Add-EsxSoftwareDepot http://vcenter server/vSphere-HA-depot
    2. Clone the existing image profile and add the vmware-fdm VIB.
      New-EsxImageProfile -CloneProfile "ESXi-5.0.0-buildnumber-standard" -name "ImageName"
      Add-EsxSoftwarePackage -ImageProfile "ImageName" -SoftwarePackage vmware-fdm
    3. Create a new rule that assigns the new image profile to your hosts and add the rule to the ruleset.
      New-DeployRule -Name "Rule Name" -Item "Image Name" -Pattern "my host pattern"
      Add-DeployRule -DeployRule "Rule Name"
    4. Perform a test and repair compliance operation for the hosts.
      Test-DeployRuleSetCompliance Host_list

  • If Stateless Caching is turned on, and the Auto Deploy server becomes unavailable, the host might not automatically boot using the stored image
    In some cases, a host that is set up for stateless caching with Auto Deploy does not automatically boot from the disk that has the stored image if the Auto Deploy server becomes unavailable. This can happen even if the boot device that you want is next in logical boot order. What precisely happens depends on the server vendor BIOS settings.

    Workaround: Manually select the disk that has the cached image as the boot device.

  • During upgrade of ESXi 5.0 hosts to ESXi 5.1 with ESXCLI, vMotion and Fault Tolerance (FT) logging settings are lost
    On an ESXi 5.0 host, you enable vMotion and FT for a port group. You upgrade the host by running the command esxcli software profile update. As part of a successful upgrade, the vMotion settings and the Fault Tolerance logging settings are returned to their defaults, that is, disabled.

    Workaround: Use vSphere Upgrade Manager to upgrade the hosts, or return vMotion and Fault Tolerance to their pre-upgrade settings manually.

Networking Issues

  • On an SR-IOV enabled ESXi host, virtual machines associated with virtual functions might fail to start
    When SR-IOV is enabled on ESXi 5.1 hosts with Intel ixgbe NICs and if several virtual functions are enabled in this environment, some virtual machines might fail to start.
    Messages similar to the following are displayed in the vmware.log file:
    2013-02-28T07:06:31.863Z| vcpu-1| I120: Msg_Post: Error
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ PCIPassthruChangeIntrSettings: 0a:17.3 failed to register interrupt (error code 195887110)
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5122262e-ab950f8e-cd4f-b8ac6f917d68/VMLibRoot/VMLib-RHEL6.2-64-HW7-default-3-2-1361954882/vmwar
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.withoutLog] You can request support.
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.vmSupport.vmx86]
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ To collect data to submit to VMware technical support, run "vm-support".
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.response] We will respond on the basis of your support entitlement.

    Workaround: Reduce the number of virtual functions associated with the affected virtual machine and start it.

  • System stops responding during TFTP/HTTP transfer when provisioning ESXi 5.1 or 5.0 U1 with Auto Deploy
    When provisioning ESXi 5.1 or 5.0 U1 with Auto Deploy on Emulex 10GbE NC553i FlexFabric 2 Ports using the latest open-source gPXE, the system stops responding during TFTP/HTTP transfer.

    Emulex 10GbE PCI-E controllers are memory-mapped controllers. The PXE/UNDI stack running on this controller must switch to big real mode from real mode during the PXE TFTP/HTTP transfer to program the device-specific registers located above 1MB in order to send and receive packets through the network. During this process, CPU interrupts are inadvertently enabled, which causes the system to stop responding when other device interrupts are generated during the CPU mode switching.

    Workaround: Upgrade the NIC firmware to build 4.1.450.7 or later.

  • Changes to the number of ports on a standard virtual switch do not take effect until host is rebooted
    When you change the number of ports on a standard virtual switch, the changes do not take effect until you reboot the host. This differs from the behavior with a distributed virtual switch, where changes to the number of ports take effect immediately.

    When changing the number of ports on a standard virtual switch, ensure that the total number of ports on the host, from both standard and distributed switches, does not exceed 4096.

    Workaround: None.

  • Administrative state of a physical NIC not reported properly as down
    Administratively setting a physical NIC state to down does not conform to IEEE standards. When a physical NIC is set to down through the virtual switch command, it causes two known problems:
    • ESXi experiences a traffic increase that it cannot handle, wasting network resources at the physical switch fronting ESXi and within ESXi itself.

    • The NIC behaves in an unexpected way. Operators expect to see the NIC powered down, but the NIC displays as still active.

    VMware recommends using the esxcli network nic down -n vmnicN command with the following caveats:
    • This command turns off the driver only. It does not power off the NIC. When the ESXi physical network adapter is viewed from the management interface of the physical switch fronting the ESXi system, the standard switch uplink still appears to be active.

    • The administrative state of a NIC is not visible in ESXCLI or the UI. When debugging, remember to check the state by examining /etc/vmware/esx.conf.

    • The SNMP agent reports the administrative state; however, it reports the state incorrectly if the NIC was set to down while the operational state was already down. It reports the admin state correctly if the NIC was set to down while the operational state was active.

    Workaround: Change the administrative state on the physical switch fronting the ESXi system to down instead of using the virtual switch command.

  • Linux driver support changes
    Device drivers for VMXNET2 or VMXNET (flexible) virtual NICs are not available for virtual machines running Linux kernel version 3.3 and later.

    Workaround: Use a VMXNET3 or e1000 virtual NIC for virtual machines running Linux kernel version 3.3 and later.

  • vSphere 5.0 network I/O control bandwidth allocation is not distributed fairly across multiple uplinks
    In vSphere 5.0, if a networking bandwidth limit is set on a resource pool while using network I/O control, this limit is enforced across a team of uplinks at the host level. This bandwidth cap is implemented by a token distribution algorithm that is not designed to fairly distribute bandwidth between multiple uplinks.

    Workaround: In vSphere 5.1, network I/O control limits are enforced on a per-uplink basis.

  • Mirrored Packet Length setting could cause a remote mirroring source session not to function
    When you configure a remote mirroring source session with the Mirrored Packet Length option set, the destination does not receive some mirrored packets. However, if you disable the option, packets are again received.
    If the Mirrored Packet Length option is set, packets longer than the specified length are truncated, and some packets are dropped. Lower layer code does not fragment the packets or recalculate the checksum for the dropped packets. Two things might cause packets to drop:

    • The Mirrored Packet Length is greater than the maximum transmission unit (MTU)
      If TSO is enabled in your environment, the original packets could be very large. After being truncated by the Mirrored Packet Length, they are still larger than the MTU, so they are dropped by the physical NIC.

    • Intermediate switches perform L3 check
      Some truncated packets can have the wrong packet length and checksum. Some advanced physical switches check L3 information and drop invalid packets. The destination does not receive the packets.

    Workaround:

    • If TCP Segmentation Offload (TSO) is enabled, disable the Mirrored Packet Length option.

    • You can enable or disable L3 check on some switches, such as Cisco's 4500 series switch. If these switches are in use, disable the L3 check. For switches that cannot be configured, disable the Mirrored Packet Length option.
  • Enabling more than 16 VMkernel network adapters causes vMotion to fail
    vSphere 5.x has a limit of 16 VMkernel network adapters enabled for vMotion per host. If you enable more than 16 VMkernel network adapters for vMotion on a given host, vMotion migrations to or from that host might fail. An error message in the VMkernel logs on ESXi says Refusing request to initialize 17 stream ip entries, where the number indicates how many VMkernel network adapters you have enabled for vMotion.

    Workaround: Disable vMotion VMkernel network adapters until only a total of 16 are enabled for vMotion.
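
    A sketch of the workaround, assuming ESXi shell access; the vim-cmd hostsvc/vmotion/vnic_unset interface is an assumption based on other 5.x builds, the adapter name vmk17 is a placeholder, and VIMCMD defaults to a dry-run echo:

```shell
# Hedged sketch (dry-run): untag vMotion from an extra VMkernel adapter so that no
# more than 16 remain enabled. VIMCMD defaults to echo; on a real ESXi host set
# VIMCMD=vim-cmd. The vmk17 adapter name is a placeholder.
VIMCMD=${VIMCMD:-echo vim-cmd}
$VIMCMD hostsvc/vmotion/vnic_unset vmk17
```
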

  • vSphere network core dump does not work when using a nx_nic driver in a VLAN environment
    When network core dump is configured on a host that is part of a VLAN, network core dump fails when the NIC uses a QLogic Intelligent Ethernet Adapters driver (nx_nic). Received network core dump packets are not tagged with the correct VLAN tag if the uplink adapter uses nx_nic.

    Workaround: Use another uplink adapter with a different driver when configuring network coredump in a VLAN.

  • If the kickstart file for a scripted installation calls a NIC already in use, the installation fails
    If you use a kickstart file to set up a management network post installation, and you call a NIC that is already in use from the kickstart file, you see the following error message: Sysinfo error on operation returned status: Busy. Please see the VMkernel log for detailed error information.

    The error is encountered when you initiate a scripted installation on one system with two NICs: a NIC configured for SWFCoE/SWiSCSI, and a NIC configured for networking. If you use the network NIC to initiate the scripted installation by providing either netdevice=<nic> or BOOTIF=<MAC of the NIC> at boot-options, the kickstart file uses the other NIC, netdevice=<nic configured for SWFCoE / SWiSCSI>, in the network line to configure the management network.

    Installation (partitioning the disks) is successful, but when the installer tries to configure the management-network for the host with the network parameters provided in the kickstart file, it fails because the NIC was in use by SWFCoE/SWiSCSI.

    Workaround: Use an available NIC in the kickstart file for setting up a management network after installation.

  • Virtual machines running ESX that also use VMXNET3 as the pNIC might crash
    Virtual machines running ESX as a guest that also use VMXNET3 as the pNIC might crash because support for VMXNET3 is experimental. The default NIC for an ESX virtual machine is e1000, so this issue is encountered only when you override the default and choose VMXNET3 instead.

    Workaround: Use e1000 or e1000e as the pNIC for the ESX virtual machine.

  • Error message is displayed when a large number of dvPorts is in use
    When you power on a virtual machine with dvPort on a host that already has a large number of dvPorts in use, an Out of memory or Out of resources error is displayed. This can also occur when you list the switches on a host using an esxcli command.

    Workaround: Increase the dvsLargeHeap size.

    1. Change the host's advanced configuration option:
      • Command line: esxcfg-advcfg -s /Net/DVSLargeHeapMaxSize 100
      • Virtual Center: Browse to Host configuration -> Software Panel -> Advanced Settings -> Under "Net", change the DVSLargeHeapMaxSize value from 80 to 100.
      • vSphere 5.1 Web Client: Browse to Manage host -> Settings -> Advanced System Settings -> Filter. Change the DVSLargeHeapMaxSize value from 80 to 100.
    2. Capture a host profile from the host. Associate the profile with the host and update the answer file.
    3. Reboot the host to confirm the value is applied.

    Note: The max value for /Net/DVSLargeHeapMaxSize is 128.

    Please contact VMware Support if you face issues during a large deployment after changing /Net/DVSLargeHeapMaxSize to 128 and logs display either of the following error messages:

    Unable to Add Port; Status(bad0006)= Limit exceeded

    Failed to get DVS state from vmkernel Status (bad0014)= Out of memory
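
    The set command in step 1 can be paired with a read-back check using esxcfg-advcfg -g. A dry-run sketch (ESXCFG defaults to echo so the commands are only printed; on a real ESXi host, set ESXCFG=esxcfg-advcfg):

```shell
# Hedged sketch (dry-run): set /Net/DVSLargeHeapMaxSize and read it back.
# ESXCFG defaults to echo; on a real ESXi host, set ESXCFG=esxcfg-advcfg.
ESXCFG=${ESXCFG:-echo esxcfg-advcfg}
$ESXCFG -s /Net/DVSLargeHeapMaxSize 100   # set the new value (128 is the maximum)
$ESXCFG -g /Net/DVSLargeHeapMaxSize       # read the value back after the reboot
```
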

  • ESXi fails with Emulex BladeEngine-3 10G NICs (be2net driver)
    ESXi might fail on systems that have Emulex BladeEngine-3 10G NICs when a vCDNI-backed network pool is configured using VMware vCloud Director. You must obtain an updated device driver from Emulex when configuring a network pool with this device.

    Workaround: None.

Storage Issues

  • RDM LUNs get detached from virtual machines that migrate from VMFS datastore to NFS datastore
    If you use the vSphere Web Client to migrate virtual machines with RDM LUNs from a VMFS datastore to an NFS datastore, the migration operation completes without any error or warning messages, but the RDM LUNs are detached from the virtual machine after migration. However, the migration operation creates a vmdk file on the NFS datastore, with the same size as the RDM LUN, to replace the RDM LUN.
    If you use vSphere Client, an appropriate error message is displayed in the compatibility section of the migration wizard.

    Workaround: None.

  • VMFS5 datastore creation might fail when you use an EMC Symmetrix VMAX/VMAXe storage array
    If your ESXi host is connected to a VMAX/VMAXe array, you might not be able to create a VMFS5 datastore on a LUN presented from the array. If this is the case, the following error will appear: An error occurred during host configuration. The error is a result of the ATS (VAAI) portion of the Symmetrix Enginuity Microcode (VMAX 5875.x) preventing a new datastore on a previously unwritten LUN.

    Workaround:

    1. Disable Hardware Accelerated Locking on the ESXi host.
    2. Create a VMFS5 datastore.
    3. Reenable Hardware Accelerated Locking on the host.

    Use the following tasks to disable and reenable the Hardware Accelerated Locking parameter.

    In the vSphere Web Client

    1. Browse to the host in the vSphere Web Client navigator.
    2. Click the Manage tab, and click Settings.
    3. Under System, click Advanced System Settings.
    4. Select VMFS3.HardwareAcceleratedLocking and click the Edit icon.
    5. Change the value of the VMFS3.HardwareAcceleratedLocking parameter:
      • 0 disabled
      • 1 enabled

    In the vSphere Client

    1. In the vSphere Client inventory panel, select the host.
    2. Click the Configuration tab, and click Advanced Settings under Software.
    3. Change the value of the VMFS3.HardwareAcceleratedLocking parameter:
      • 0 disabled
      • 1 enabled
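
    The same parameter can also be toggled from the ESXi shell. A dry-run sketch, assuming the ESXi 5.x esxcli system settings advanced namespace (ESXCLI defaults to echo so the commands are only printed):

```shell
# Hedged sketch (dry-run): toggle VMFS3.HardwareAcceleratedLocking from the shell.
# ESXCLI defaults to echo; on a real ESXi host, set ESXCLI=esxcli.
ESXCLI=${ESXCLI:-echo esxcli}
$ESXCLI system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0   # disable
# ... create the VMFS5 datastore here ...
$ESXCLI system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 1   # re-enable
```
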

  • Attempts to create a GPT partition on a blank disk might fail when using Storagesystem::updateDiskPartitions()
    You can use the Storagesystem::computeDiskPartitionInfo API to retrieve disk specification, and then use the disk specification to label the disk and create a partition with Storagesystem::updateDiskPartitions(). However, if the disk is initially blank and the target disk format is GPT, your attempts to create the partition might fail.

    Workaround: Use DatastoreSystem::createVmfsDatastore instead to label and partition a blank disk, and to create a VMFS5 datastore.

  • Attempts to create a diagnostic partition on a GPT disk might fail
    If a GPT disk has no partitions, or the tailing portion of the disk is empty, you might not be able to create a diagnostic partition on the disk.

    Workaround: Avoid using GPT-formatted disks for diagnostic partitions. If you must use an existing blank GPT disk for the diagnostic partition, convert the disk to the MBR format.

    1. Create a VMFS3 datastore on the disk.
    2. Remove the datastore.

    The disk format changes from GPT to MBR.

  • ESXi cannot boot from a FCoE LUN that is larger than 2TB and accessed through an Intel FCoE NIC
    When you install ESXi on a FCoE boot LUN that is larger than 2TB and is accessed through an Intel FCoE NIC, the installation might succeed. However, when you attempt to boot your ESXi host, the boot fails. You see the error messages: ERROR: No suitable geometry for this disk capacity! and ERROR: Failed to connect to any configured disk! at BIOS time.

    Workaround: Do not install ESXi on a FCoE LUN larger than 2TB if it is connected to the Intel FCoE NIC configured for FCoE boot. Install ESXi on a FCoE LUN that is smaller than 2TB.

Server Configuration Issues

  • Host Profiles compliance check might fail after you upgrade to ESXi 5.1 Update 3*
    After you upgrade ESXi 5.1.x to ESXi 5.1 Update 3, the Native Multipathing Plugin (NMP) device information might not get included in the host profile compliance check. The host profile compliance check might fail for all the existing host profiles. The following compliance error message is displayed:
    Specification state absent from host: SATP configuration for device naa.xxxx needs to be updated

    Workaround: To work around this issue, see KB 2032822.
  • Applying host profiles might fail when accessing VMFS folders through console
    If a user is accessing the VMFS datastore folder through the console at the same time a host profile is being applied to the host, the remediation or apply task might fail. This failure occurs when stateless caching is enabled on the host profile or if an auto deploy installation occurred.

    Workaround: Do not access the VMFS datastore through the console while remediating the host profile.

  • Leading white space in login banner causes host profile compliance failure
    When you edit a host profile and change the text for the Login Banner (Message of the Day) option, but add a leading white space in the banner text, a compliance error occurs when the profile is applied. The compliance error Login banner has been modified appears.

    Workaround: Edit the host profile and remove the leading white space from the Login Banner policy option.

  • Host profile extracted from ESXi 5.0 host fails to apply to ESX 5.1 host with Active Directory enabled
    When applying a host profile with Active Directory enabled that was originally extracted from an ESXi 5.0 host to an ESX 5.1 host, the apply task fails. Setting the maximum memory size for the likewise system resource pool might cause an error to occur. When Active Directory is enabled, the services in the likewise system resource pool consume more than the default maximum memory limit for ESXi 5.0 captured in an ESXi 5.0 host profile. As a result, applying an ESXi 5.0 host profile fails during attempts to set the maximum memory limit to the ESXi 5.0 levels.

    Workaround: Perform one of the following:

    • Manually edit the host profile to increase the maximum memory limit for the likewise group.
      1. From the host profile editor, navigate to the Resource Pool folder, and view host/vim/vmvisor/plugins/likewise.
      2. Modify the Maximum Memory (MB) setting from 20 (the ESXi 5.0 default) to 25 (the ESXi 5.1 default).
    • Disable the subprofile for the likewise group. Do one of the following:
      • In the vSphere Web Client, edit the host profile and deselect the checkbox for the Resource Pool folder. This action disables all resource pool management. Alternatively, disable only the host/vim/vmvisor/plugins/likewise item under the Resource Pool folder.
      • In the vSphere Client, right-click the host profile and select Enable/Disable Profile Configuration... from the menu.

  • Host gateway deleted and compliance failures occur when ESXi 5.0.x host profile re-applied to stateful ESXi 5.1 host
    When an ESXi 5.0.x host profile is applied to a freshly installed ESXi 5.1 host, the profile compliance status is noncompliant. Applying the same profile again deletes the host's gateway IP, and the compliance status continues to show as noncompliant with the IP route configuration doesn't match the specification status message.

    Workaround: Perform one of the following workarounds:

    • Log in to the host through DCUI and add the default gateway manually with the following esxcli command:
      esxcli network ip route ipv4 add --gateway xx.xx.xx.xx --network yy.yy.yy.yy
    • Extract a new host profile from the ESX 5.1 host after applying the ESX 5.0 host profile once. Migrate the ESX 5.1 host to the new ESX 5.1-based host profile.
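
    The first workaround above can be wrapped in a small helper. This is a sketch with a hypothetical function name; the gateway and network arguments are placeholders, and the listing step is added only to verify the result:

```shell
# Hypothetical helper: restore the default gateway via esxcli and then
# list the IPv4 route table to confirm the route was added.
# Both arguments are placeholders for your environment's values.
restore_gateway() {
  esxcli network ip route ipv4 add --gateway "$1" --network "$2"
  esxcli network ip route ipv4 list
}
```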

  • Compliance errors might occur after stateless caching enabled on USB disk
    When stateless caching to USB disks is enabled on a host profile, compliance errors might occur after remediation. After rebooting the host so that the remediated changes are applied, the stateless caching is successful, but compliance failures continue.

    Workaround: No workaround is available.

  • Hosts with large number of datastores time out while applying host profile with stateless caching enabled
    A host that has a large number of datastores times out when applying a host profile with stateless caching enabled.

    Workaround: Use the vSphere Client to increase the timeout:

    1. Select Administration > vCenter Server Settings.
    2. Select Timeout Settings.
    3. Change the values for Normal Operations and Long Operations to 3600 seconds.

  • Cannot extract host profile from host when IPv4 is disabled on vmknics
    If you remove all IPv4 addresses from all vmknics, you cannot extract a host profile from that host. This limitation most affects hosts provisioned with Auto Deploy, because host profiles are the only way to save the host configuration in that environment.

    Workaround: Assign an IPv4 address to at least one vmknic.

  • Applying a host profile extracted from an ESXi 4.1 host to an ESXi 5.1 host fails
    If you set up a host with ESXi 4.1, extract a host profile from this host (with vCenter Server), and attempt to attach a profile to an ESXi 5.1 host, the operation fails when you attempt to apply the profile. You might receive the following error: NTP service turned off.

    For ESXi 4.1, the NTPD service could be running (on state) even without an NTP server provided in /etc/ntp.conf. ESXi 5.1 requires an explicit NTP server for the service to run.

    Workaround: Turn on the NTP service by adding a valid NTP server in /etc/ntp.conf and restarting the NTP daemon on the 5.1 host. Confirm that the service persists after the reboot. This action ensures the NTP service stays synchronized for the host and the profile being applied to it.
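
    For example, appending a server entry before restarting the daemon might look like the following sketch. The helper name and the server address are hypothetical; the /etc/ntp.conf path is the stock location named above:

```shell
# Append an explicit NTP server entry to the given ntp.conf file.
# $1 = NTP server address (placeholder), $2 = path to ntp.conf.
add_ntp_server() {
  printf 'server %s\n' "$1" >> "$2"
}
# Typical use on the 5.1 host (not executed here):
#   add_ntp_server 0.pool.ntp.org /etc/ntp.conf
#   /etc/init.d/ntpd restart
```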

  • Host profile shows noncompliant after profile successfully applied
    This problem occurs when extracting a host profile from an ESXi 5.0 host and applying it to an ESXi 5.1 host that contains a local SAS device. Even when the host profile remediation is successful, the host profile compliance shows as noncompliant.

    You might receive errors similar to the following:

    • Specification state absent from host: device naa.500000e014ab4f70 Path Selection Policy needs to be set to VMW_PSP_FIXED
    • Specification state absent from host: device naa.500000e014ab4f70 parameters needs to be set to State = "on" Queue Full Sample Size = "0" Queue Full Threshold = "0"

    The ESXi 5.1 host profile storage plugin filters out local SAS devices from the PSA and NMP device configuration, while ESXi 5.0 includes such device configurations. This results in a missing device when applying the older host profile to a newer host.

    Workaround: Manually edit the host profile, and remove the PSA and NMP device configuration entries for all local SAS devices. You can determine if a device is a local SAS by entering the following esxcli command:
    esxcli storage core device list

    If the following line is returned, the device is a local SAS:
    Is Local SAS Device
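
    To list only the local SAS devices whose entries need to be removed from the profile, the esxcli output can be filtered. This pipeline is a sketch (the helper name is hypothetical) that assumes each device section starts with the device identifier on its own line, followed by indented attribute lines such as the "Is Local SAS Device" field shown above:

```shell
# Print the identifier of each device whose "Is Local SAS Device"
# attribute is true. Device sections are assumed to begin with an
# identifier line such as "naa.500000e014ab4f70".
filter_local_sas() {
  awk '/^[a-z0-9]+\./ { dev = $1 }
       /Is Local SAS Device: true/ { print dev }'
}
# Typical use: esxcli storage core device list | filter_local_sas
```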

  • Default system services always start on ESXi hosts provisioned with Auto Deploy
    For ESXi hosts provisioned with Auto Deploy, the Service Startup Policy in the Service Configuration section of the associated host profile is not fully honored. In particular, if one of the services that is turned on by default on ESXi has a Startup Policy value of off, that service still starts at boot time on the ESXi host provisioned with Auto Deploy.

    Workaround: Manually stop the service after booting the ESXi host.

  • Information retrieval from VMWARE-VMINFO-MIB does not happen correctly after an snmpd restart
    Some information from VMWARE-VMINFO-MIB might be missing during SNMPWalk after you restart the snmpd daemon using /etc/init.d/snmpd restart from the ESXi Shell.

    Workaround: Do not use /etc/init.d/snmpd restart. You must use the esxcli system snmp set --enable command to start or stop the SNMP daemon. If you used /etc/init.d/snmpd restart to restart snmpd from the ESXi Shell, restart Hostd, either from DCUI or by using /etc/init.d/hostd restart from the ESXi Shell.
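
    The supported stop/start sequence can be expressed as follows; a minimal sketch with a hypothetical function name, assuming the esxcli syntax named in the workaround above:

```shell
# Restart the SNMP agent through esxcli rather than /etc/init.d/snmpd,
# so that VMWARE-VMINFO-MIB data stays intact after the restart.
restart_snmp_agent() {
  esxcli system snmp set --enable false   # stop the daemon
  esxcli system snmp set --enable true    # start it again
}
```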

vCenter Server and vSphere Client Issues
  • Enabling or Disabling View Storage Accelerator might cause ESXi hosts to lose connectivity to vCenter Server
    If VMware View is deployed with vSphere 5.1, and a View administrator enables or disables View Storage Accelerator in a desktop pool, ESXi 5.1 hosts might lose connectivity to vCenter Server 5.1.

    The View Storage Accelerator feature is also called Content Based Read Caching. In the View 5.1 View Administrator console, the feature is called Host caching.

    Workaround: Do not enable or disable View Storage Accelerator in View environments deployed with vSphere 5.1.

Virtual Machine Management Issues
  • Virtual Machine compatibility upgrade from ESX 3.x and later (VM version 4) incorrectly configures the Windows virtual machine Flexible adapter to the Windows system default driver
    If you have a Windows guest operating system with a Flexible network adapter that is configured for the VMware Accelerated AMD PCnet Adapter driver, when you upgrade the virtual machine compatibility from ESX 3.x and later (VM version 4) to any later compatibility setting, for example, ESXi 4.x and later (VM version 7), Windows configures the Flexible adapter to the Windows AMD PCNET Family PCI Ethernet Adapter default driver.
    This misconfiguration occurs because the VMware Tools drivers are unsigned and Windows picks up the signed default Windows driver. Flexible adapter network settings that existed before the compatibility upgrade are lost, and the network speed of the NIC changes from 1Gbps to 10Mbps.

    Workaround: Configure the Flexible network adapters to use the VMXNET driver from the Windows guest OS after you upgrade the virtual machine's compatibility. If your guest is updated with ESXi 5.1 VMware Tools, the VMXNET driver is installed in the following location: C:\Program Files\Common Files\VMware\Drivers\vmxnet\.

  • When you install VMware Tools on a virtual machine and reboot, the network becomes unusable
    On virtual machines with CentOS 6.3 and Oracle Linux 6.3 operating systems, the network becomes unusable after a successful installation of VMware Tools and a reboot of the virtual machine. When you attempt to manually get the IP address from a DHCP server or set a static IP address from the command line, the error Cannot allocate memory appears.
    The problem is that the Flexible network adapter, which is used by default, is not a good choice for those operating systems.

    Workaround: Change the network adapter from Flexible to E1000 or VMXNET 3, as follows:

    1. Run the vmware-uninstall-tools.pl command to uninstall VMware Tools.
    2. Power off the virtual machine.
    3. In the vSphere Web Client, right-click the virtual machine and select Edit Settings.
    4. Click Virtual Hardware, and remove the current network adapter by clicking the Remove icon.
    5. Add a new Network adapter, and choose the adapter type E1000 or VMXNET 3.
    6. Power on the virtual machine.
    7. Reinstall VMware Tools.
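
    With the virtual machine powered off, the adapter-type portion of steps 3 through 5 can also be sketched directly against the VM's .vmx file. This is a hypothetical shortcut, not the documented procedure: it assumes the NIC in question is ethernet0 and that its virtualDev key is already present in the file:

```shell
# Rewrite the ethernet0 adapter type to e1000 in a .vmx file.
# The .vmx path and the ethernet0 device name are assumptions;
# run only while the virtual machine is powered off.
set_adapter_e1000() {
  sed -i 's/^ethernet0\.virtualDev = .*/ethernet0.virtualDev = "e1000"/' "$1"
}
```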

  • Clone or migration operations that involve non-VMFS virtual disks on ESXi fail with an error
    Whether you use the vmkfstools command or the client to perform a clone, copy, or migration operation on virtual disks in hosted formats, the operation fails with the following error message: The system cannot find the file specified.

    Workaround: To perform a clone, copy, or migration operation on virtual disks in hosted formats, load the VMkernel multiextent module into ESXi.

    1. Log in to ESXi Shell and load the multiextent module.
      # vmkload_mod multiextent
    2. Check if any of your virtual machine disks are of a hosted type. Hosted disks end with the -s00x.vmdk extension.
    3. Convert virtual disks in hosted format to one of the VMFS formats.
      1. Clone source hosted disk test1.vmdk to test2.vmdk.
        # vmkfstools -i test1.vmdk test2.vmdk -d zeroedthick|eagerzeroedthick|thin
      2. Delete the hosted disk test1.vmdk after successful cloning.
        # vmkfstools -U test1.vmdk
      3. Rename the cloned vmfs type disk test2.vmdk to test1.vmdk.
        # vmkfstools -E test2.vmdk test1.vmdk
    4. Unload the multiextent module.
      # vmkload_mod -u multiextent
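
    Steps 3.1 through 3.3 above can be combined into a small wrapper; a sketch with a hypothetical helper name, a thin target format, and a temporary clone name of its own choosing:

```shell
# Convert one hosted-format disk to VMFS thin format in place:
# clone it, delete the hosted original, then rename the clone back.
# Run after "vmkload_mod multiextent" has loaded the module.
convert_hosted_disk() {
  src="$1"
  tmp="${src%.vmdk}-vmfs.vmdk"       # temporary name for the clone
  vmkfstools -i "$src" "$tmp" -d thin &&
  vmkfstools -U "$src" &&
  vmkfstools -E "$tmp" "$src"
}
# Typical use: convert_hosted_disk test1.vmdk
```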

  • A virtual machine does not have an IP address assigned to it and does not appear operational
    This issue is caused by a LUN reset request initiated from a guest OS. This issue is specific to IBM XIV Fibre Channel array with software FCoE configured in ESXi hosts. Virtual machines that reside on the LUN show the following problems:

    • No IP address is assigned to the virtual machines.
    • Virtual machines cannot power on or power off.
    • No mouse cursor is showing inside the console. As a result, there is no way to control or interact with the affected virtual machine inside the guest OS.

    Workaround: From your ESXi host, reset the LUN on which the affected virtual machines reside.

    1. Run the following command to get the LUN's information:
      # vmkfstools -P /vmfs/volumes/DATASTORE_NAME
    2. Search for the following line in the output to obtain the LUN's UID:
      Partitions spanned (on 'lvm'): eui.001738004XXXXXX:1
      eui.001738004XXXXXX is the device UID.
    3. Run the following command to reset the LUN:
      # vmkfstools -L lunreset /vmfs/devices/disks/eui.001738004XXXXXX
    4. If a non-responsive virtual machine resides on a datastore that has multiple LUNs associated with it, for example, added extents, perform the LUN reset for all datastore extents.
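
    Steps 1 through 3 above can also be scripted; a sketch with a hypothetical helper name, assuming the "Partitions spanned" line format shown in step 2:

```shell
# Extract the device UID (eui.*) from "vmkfstools -P" output by
# stripping the trailing ":<partition>" suffix from the
# "Partitions spanned" line.
get_lun_uid() {
  sed -n 's/.*Partitions spanned.*: \(eui\.[^:]*\):.*/\1/p'
}
# Typical use (DATASTORE_NAME is a placeholder):
#   uid=$(vmkfstools -P /vmfs/volumes/DATASTORE_NAME | get_lun_uid)
#   vmkfstools -L lunreset "/vmfs/devices/disks/$uid"
```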

Migration Issues
  • Attempts to use Storage vMotion to migrate multiple linked-clone virtual machines fail
    This failure typically affects linked-clone virtual machines. The failure occurs when the size of delta disks is 1MB and the Content Based Read Cache (CBRC) feature has been enabled in ESXi hosts. You see the following error message: The source detected that the destination failed to resume.

    Workaround: Use one of the following methods to avoid Storage vMotion failures:

    • Use 4KB as the delta disk size.

    • Instead of using Storage vMotion, migrate powered-off virtual machines to a new datastore.

VMware HA and Fault Tolerance Issues
  • Fault tolerant virtual machines crash when set to record statistics information on a vCenter Server beta build
    The vmx*3 feature allows users to run stats vmx to collect performance statistics for debugging support issues. Running stats vmx is not compatible with Fault Tolerance enabled on a vCenter Server beta build.

    Workaround: When enabling Fault Tolerance, ensure that the virtual machine is not set to record statistics on a beta build of vCenter Server.

Supported Hardware Issues
  • PCI Unknown Unknown status is displayed in vCenter Server on the Apple Mac Pro server
    The hardware status tab in vSphere 5.1 displays Unknown Unknown for some PCI devices on the Apple Mac Pro. This is because of missing hardware descriptions for these PCI devices on the Apple Mac Pro. The display error in the hardware status tab does not prevent these PCI devices from functioning.

    Workaround: None.

  • PCI Unknown Unknown status is displayed in vCenter Server on the AMD PileDriver
    The hardware status tab in vSphere 5.1 displays Unknown Unknown for some PCI devices on the AMD PileDriver. This is because of missing hardware descriptions for these PCI devices on the AMD PileDriver. The display error in the hardware status tab does not prevent these PCI devices from functioning.

    Workaround: None.

  • DPM is not supported on the Apple Mac Pro server
    The vSphere 5.1 distributed power management (DPM) feature is not supported on the Apple Mac Pro. Do not add the Apple Mac Pro to a cluster that has DPM enabled. If the host enters "Standby" state, it fails to exit the standby state when the power on command is issued and displays an operation timed out error. The Apple Mac Pro cannot wake from the software power off command that is used by vSphere when putting a host in standby state.

    Workaround: If the Apple Mac Pro host enters "Standby", you must power on the host by physically pressing the power button.

  • IPMI is not supported on the Apple Mac Pro server
    The hardware status tab in vSphere 5.1 does not display correct data or there is missing data for some of the hardware components on the Apple Mac Pro. This is because IPMI is not supported on the Apple Mac Pro.

    Workaround: None.

Miscellaneous Issues
  • After a network or storage interruption, syslog over TCP, syslog over SSL, and storage logging do not restart automatically
    After a network or storage interruption, the syslog service does not restart automatically in certain configurations. These configurations include syslog over TCP, syslog over SSL, and storage logging.

    Workaround: Restart syslog explicitly by running the following command:
    esxcli system syslog reload
    You can also configure syslog over UDP, which restarts automatically.
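
    For example, switching the remote log host to UDP and reloading might look like the following sketch. The helper name is hypothetical and the remote host and port are placeholders:

```shell
# Point syslog at a UDP remote host, which restarts automatically
# after interruptions, then reload the service.
# $1 = remote syslog host (placeholder), $2 = port (placeholder).
use_udp_syslog() {
  esxcli system syslog config set --loghost="udp://$1:$2"
  esxcli system syslog reload
}
```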