VMware

VMware ESXi 5.0 Update 3 Release Notes

ESXi 5.0 Update 3 | 17 OCT 2013 | Build 1311175

Last updated: 31 OCT 2013

What's in the Release Notes

These release notes cover the following topics:

  • What's New
  • Earlier Releases of ESXi 5.0
  • Internationalization
  • Compatibility and Installation
  • Upgrades for This Release
  • VMware vSphere SDKs
  • Open Source Components for VMware vSphere
  • Patches Contained in this Release
  • Resolved Issues

What's New

The following information describes some of the enhancements available in this release of VMware ESXi:

  • Resolved Issues - This release delivers a number of bug fixes that have been documented in the Resolved Issues section.

Earlier Releases of ESXi 5.0

Features and known issues of ESXi 5.0 are described in the release notes for each release. To view release notes for earlier releases of ESXi 5.0, click one of the following links:

Internationalization

VMware vSphere 5.0 Update 3 is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese

vSphere Client Locale Forcing Mode

With vSphere 5.0 Update 3, you can configure the VMware vSphere Client to provide the interface text in English even when the machine on which it is running is not English. You can set this configuration for the duration of a single session by supplying a command-line switch. This configuration applies to the interface text and does not affect other locale-related settings such as date and time or numeric formatting.

The following vSphere Client command causes the individual session to appear in English:

vpxClient -locale en_US

Compatibility and Installation

ESXi, vCenter Server, and vSphere Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and previous versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Client, and optional VMware products. In addition, check this site for information about supported management and backup agents before installing ESXi or vCenter Server.

The vSphere Web Client and the vSphere Client are packaged with the vCenter Server and modules ZIP file. You can install one or both clients from the VMware vCenter™ Installer wizard.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 5.0.3 adds support for ESXi 5.0 Update 3 and vCenter Server 5.0 Update 3 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.


Hardware Compatibility for ESXi

To determine which processors, storage devices, SAN arrays, and I/O devices are compatible with vSphere 5.0 Update 3, use the ESXi 5.0 Update 3 information in the VMware Compatibility Guide.

Upgrades and Installations for supported CPUs. vSphere 5.0 Update 3 supports only CPUs that have LAHF and SAHF CPU instruction sets. During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 5.0 Update 3. For CPU support, see the VMware Compatibility Guide.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 5.0 Update 3, use the ESXi 5.0 Update 3 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines with virtual hardware versions 4.0 and later are supported with ESXi 5.0 Update 3. Hardware version 3 is no longer supported. To use hardware version 3 virtual machines on ESXi 5.0 Update 3, upgrade virtual hardware. See the vSphere Upgrade documentation.

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for step-by-step guidance on installing and configuring ESXi and vCenter Server.

After successful installation, you must perform some licensing, networking, and security configuration. For information about these configuration tasks, see the following guides in the vSphere documentation.

 

Migrating Third-Party Solutions

ESX/ESXi hosts might contain third-party software, such as Cisco Nexus 1000V VEMs or EMC PowerPath modules. The ESXi 5.0 architecture is changed from ESX/ESXi 4.x so that customized third-party software packages (VIBs) cannot be migrated when you upgrade from ESX/ESXi 4.x to ESXi 5.0 and later.
When you upgrade a 4.x host with custom VIBs that are not in the upgrade ISO, you can proceed with the upgrade, but an error message lists the missing VIBs. To upgrade or migrate such hosts successfully, you must use Image Builder to create a custom ESXi ISO image that includes the missing VIBs. To upgrade without including the third-party software, use the ForceMigrate option or select the option to remove third-party software modules during the remediation process in vSphere Update Manager.

For information about how to use Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation. For information about upgrading with third-party customizations or with vSphere Update Manager, see the vSphere Upgrade and Installing and Administering VMware vSphere Update Manager documentation.

L3-routed NFS Storage Access

vSphere 5.0 Update 3 supports L3 routed NFS storage access when your environment meets the following conditions:
  • Use Cisco's Hot Standby Router Protocol (HSRP) on the IP router. If you are using a non-Cisco router, use Virtual Router Redundancy Protocol (VRRP) instead.
  • Use Quality of Service (QoS) to prioritize NFS L3 traffic on networks with limited bandwidth, or on networks that experience congestion. See your router documentation for details.
  • Follow the routed NFS L3 best practices recommended by your storage vendor. Contact your storage vendor for details.
  • Disable Network I/O Resource Management (NetIORM).
  • If you are planning to use systems with top-of-rack switches or switch-dependent I/O device partitioning, contact your system vendor for compatibility and support.
In an L3 environment, the following additional restrictions apply:
  • The environment does not support VMware Site Recovery Manager.
  • The environment supports only the NFS protocol. Do not use other storage protocols such as FCoE over the same physical network.
  • The NFS traffic in this environment does not support IPv6.
  • The NFS traffic in this environment can be routed only over a LAN. Other environments such as WAN are not supported.
  • The environment does not support Distributed Virtual Switch (DVS).
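For reference, a routed NFS datastore is mounted like any other NFS datastore; only the routing, QoS, and redundancy conditions above differ. A minimal sketch using the esxcli storage nfs namespace, where the filer address, export path, and volume name are hypothetical:

esxcli storage nfs add --host=192.0.2.50 --share=/export/ds01 --volume-name=nfs-l3-ds01
esxcli storage nfs list

The second command simply confirms that the new volume is mounted and accessible.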

Upgrades for This Release

For instructions about how to upgrade vCenter Server and ESXi hosts, see the vSphere Upgrade documentation.

Upgrading VMware Tools

VMware ESXi 5.0 Update 3 contains the latest version of VMware Tools. VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system. Refer to the VMware Tools Resolved Issues for a list of issues resolved in this release of ESXi related to VMware Tools.

To determine an installed VMware Tools version, see Verifying a VMware Tools build version (KB 1003947).
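From inside the guest operating system, the VMware Tools daemon can also report its own version directly. A minimal sketch for Linux guests; the command ships with current VMware Tools releases, and the exact output format varies by release:

vmware-toolbox-cmd -v

The command prints the Tools version and build number, which can then be compared against the table in KB 1003947.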

ESX/ESXi Upgrades

You can upgrade ESX/ESXi hosts to ESXi 5.0 Update 3 in several ways.

  • vSphere Update Manager. If your site uses vCenter Server, use vSphere Update Manager to perform an orchestrated host upgrade or an orchestrated virtual machine upgrade from ESX/ESXi 4.0, 4.1, and ESXi 5.0. See the instructions in the vSphere Upgrade documentation, or for complete documentation about vSphere Update Manager, see the Installing and Administering VMware vSphere Update Manager documentation.

  • Upgrade interactively using an ESXi installer ISO image on CD-ROM or DVD. You can run the ESXi 5.0 Update 3 installer from a CD-ROM or DVD drive to perform an interactive upgrade. This method is appropriate for upgrading a small number of hosts.

  • Perform a scripted upgrade. You can upgrade or migrate from ESXi/ESX 4.x hosts to ESXi 5.0 Update 3 by running an update script, which provides an efficient, unattended upgrade. Scripted upgrades also provide an efficient way to deploy multiple hosts. You can use a script to upgrade ESXi from a CD-ROM or DVD drive, or by PXE-booting the installer.

  • ESXCLI: You can update and apply patches to ESXi 5.x hosts by using the esxcli command-line utility. You cannot use esxcli to upgrade ESX/ESXi 4.x hosts to ESXi 5.0 Update 3. Example sketches for the scripted and ESXCLI methods follow this list.
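For the scripted method, a kickstart file drives the unattended upgrade. The following is a minimal sketch only, under the assumption that the defaults are acceptable; real scripts typically also configure networking and passwords, and the full set of supported commands and options is documented in the vSphere Installation and Setup guide:

vmaccepteula
upgrade --firstdisk

The script is typically supplied at boot time with an option such as ks=usb. For the ESXCLI method, a hedged sketch that applies the offline bundle from this release; the datastore path is only an example, and the host should be placed in maintenance mode first:

esxcli software vib update -d /vmfs/volumes/datastore1/update-from-esxi5.0-5.0_update03.zip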
Supported Upgrade Paths for Upgrade to ESXi 5.0 Update 3:

Each source version includes its update releases:
  • ESX 4.0: includes ESX 4.0 Update 1, Update 2, Update 3, and Update 4
  • ESXi 4.0: includes ESXi 4.0 Update 1, Update 2, Update 3, and Update 4
  • ESX 4.1: includes ESX 4.1 Update 1, Update 2, and Update 3
  • ESXi 4.1: includes ESXi 4.1 Update 1, Update 2, and Update 3
  • ESXi 5.0: includes ESXi 5.0 Update 1 and Update 2

Upgrade Deliverables | Supported Upgrade Tools | ESX 4.0 | ESXi 4.0 | ESX 4.1 | ESXi 4.1 | ESXi 5.0
VMware-VMvisor-Installer-5.0.0.update03-1311175.x86_64.iso | VMware vCenter Update Manager, CD Upgrade, Scripted Upgrade | Yes | Yes | Yes | Yes | Yes*
update-from-esxi5.0-5.0_update03.zip | VMware vCenter Update Manager, ESXCLI, VMware vSphere CLI | No | No | No | No | Yes
Using patch definitions downloaded from VMware portal (online) | VMware vCenter Update Manager with patch baseline | No | No | No | No | Yes

* Note: Upgrade from ESXi 5.0 using VMware-VMvisor-Installer-5.0.0.update03-1311175.x86_64.iso with VMware vCenter Update Manager is not supported. Instead, you must upgrade using update-from-esxi5.0-5.0_update03.zip with VMware vCenter Update Manager.

Compatibility of the vSphere 5.0 Update 3 deliverables with vSphere 5.1 upgrade paths cannot be guaranteed.

VMware vSphere SDKs

VMware vSphere provides a set of SDKs for vSphere server and guest operating system environments.

  • vSphere Management SDK. A collection of software development kits for the vSphere management programming environment. The vSphere Management SDK contains the following vSphere SDKs:

    • vSphere Web Services SDK. Includes support for new features available in ESXi 5.0 and later and vCenter Server 5.0 and later server systems. You can also use this SDK with previous versions of ESX/ESXi and vCenter Server. For more information, see the VMware vSphere Web Services SDK Documentation.

    • vSphere vCenter Storage Monitoring Service (SMS) SDK. SMS 2.0 is supported on vCenter Server 5.0. For more information, see vCenter SMS SDK Documentation.

    • vSphere ESX Agent Manager (EAM) SDK. EAM 1.0 is supported on ESXi 5.0 Update 3. For more information, see vSphere ESX Agent Manager.

  • vSphere Guest SDK. The VMware vSphere Guest SDK 4.0 is supported on ESXi 5.0 Update 3. For more information, see the VMware vSphere Guest SDK Documentation.

  • VMware vSphere SDK for Perl. The SDK for Perl 5.0 is supported on vSphere 5.0 Update 3. For more information, see the vSphere SDK for Perl Documentation.

Open Source Components for VMware vSphere

The copyright statements and licenses applicable to the open source software components distributed in vSphere 5.0 Update 3 are available at http://www.vmware.com/download/vsphere/open_source.html, on the Open Source tab. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent generally available release of vSphere.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi500-Update03 contains the following individual bulletins:

ESXi500-201310201-UG: Updates the ESXi 5.0 esx-base vib
ESXi500-201310202-UG: Updates the ESXi 5.0 tools-light vib
ESXi500-201310203-UG: Updates the ESXi 5.0 misc-drivers vib
ESXi500-201310204-UG: Updates the ESXi 5.0 scsi-hpsa vib

Patch Release ESXi500-Update03 Security-only contains the following individual bulletins:

ESXi500-201310101-SG: Updates the ESXi 5.0 esx-base vib

ESXi500-201310102-SG: Updates the ESXi 5.0 net-bnx2x vib
ESXi500-201310103-SG: Updates the ESXi 5.0 misc-drivers vib

Patch Release ESXi500-Update03 contains the following image profiles:

ESXi-5.0.0-20131002001-standard

ESXi-5.0.0-20131002001-no-tools

Patch Release ESXi500-Update03 Security-only contains the following image profiles:

ESXi-5.0.0-20131001001s-standard
ESXi-5.0.0-20131001001s-no-tools


For information on patch and update classification, see KB 2014447.
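Either image profile can be applied with the esxcli software profile namespace once the offline bundle is available on the host. A minimal sketch, where the datastore path is only an example; the Security-only profiles listed above are applied the same way by substituting the profile name:

esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi5.0-5.0_update03.zip -p ESXi-5.0.0-20131002001-standard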

Resolved Issues

This section describes resolved issues in this release in the following subject areas:

CIM and API

  • Small-Footprint CIM Broker daemon might stop responding when CIM provider fails
    On an ESXi host, Small-Footprint CIM Broker daemon (sfcbd) might stop responding frequently when CIM provider fails during idle timeout.

    This issue is resolved in this release.
  • LSI CIM provider leaks file descriptors
    The LSI CIM provider (one of the sfcb processes) leaks file descriptors. This might cause sfcb-hhrc to stop and sfcbd to restart. The syslog file might log messages similar to the following:

    sfcb-LSIESG_SMIS13_HHR[ ]: Error opening socket pair for getProviderContext: Too many open files
    sfcb-LSIESG_SMIS13_HHR[ ]: Failed to set recv timeout (30) for socket -1. Errno = 9
    ...
    ...

    sfcb-hhrc[ ]: Timeout or other socket error
    sfcb-hhrc[ ]: TIMEOUT DOING SHARED SOCKET RECV RESULT ( )


    This issue is resolved in this release.
  • WS-Management GetInstance () action against "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_SoftwareIdentity?InstanceID=46.10000" might issue a wsa:DestinationUnreachable fault on some ESXi server
    The OMC_MCFirmwareIdentity object path is not consistent for CIM gi/ei/ein operations on systems with an Intelligent Platform Management Interface (IPMI) Baseboard Management Controller (BMC) sensor. As a result, the WS-Management GetInstance() action issues a wsa:DestinationUnreachable fault on the ESXi server.

    This issue is resolved in this release.
  • ESXi host is disconnected from vCenter Server due to sfcbd exhausting inodes
    ESXi hosts disconnect from vCenter Server and cannot be reconnected to the vCenter Server. This issue is caused by the hardware monitoring service (sfcbd), which populates the /var/run/sfcb directory with over 5000 files.

    The hostd.log file located at /var/log/ indicates that the host is out of space:
    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device
    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device


    The vmkernel.log file located at /var/log indicates that it is out of inodes:
    cpu4:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
    cpu5:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.

    This issue is resolved in this release.
  • The hardware monitoring service stops and the Hardware Status tab only displays an error message
    The Hardware Status tab does not display health statuses and displays an error message similar to the following:

    Hardware monitoring service on this host is not responding or not available.

    The hardware monitoring service (sfcbd) stops and the syslog file might display entries similar to the following:

    sfcb-smx[xxxxxx]: spRcvMsg Receive message from 12 (return socket: 6750210)
    sfcb-smx[xxxxxx]: --- spRcvMsg drop bogus request chunking 6 payLoadSize 19 chunkSize 0 from 12 resp 6750210
    sfcb-smx[xxxxxx]: spRecvReq returned error -1. Skipping message.
    sfcb-smx[xxxxxx]: spRcvMsg Receive message from 12 (return socket: 4)
    sfcb-smx[xxxxxx]: --- spRcvMsg drop bogus request chunking 220 payLoadSize 116 chunkSize 104 from 12 resp 4
    ...
    ...
    sfcb-vmware_int[xxxxxx]: spGetMsg receiving from 40 419746-11 Resource temporarily unavailable
    sfcb-vmware_int[xxxxxx]: rcvMsg receiving from 40 419746-11 Resource temporarily unavailable
    sfcb-vmware_int[xxxxxx]: Timeout or other socket error


    This issue is resolved in this release.
  • CIM Server returns incorrect PerceivedSeverity value for indication
    When IBM Systems Director (ISD) is used to monitor the ESX server, the CIM server returns an incorrect PerceivedSeverity indication value to ISD. Correcting the sensor type and the PerceivedSeverity return value resolves this issue.

    This issue is resolved in this release.
  • Cannot monitor ESXi host's hardware status
    When the SFCBD service is enabled in trace mode and the service stops running, the Hardware Status tab for an ESXi host might report an error. Third-party tools might be unable to monitor the ESXi host's hardware status.

    This issue is resolved in this release.
  • Incorrect error messages might be displayed by CIM providers
    Incorrect error messages similar to the following might be displayed by CIM providers:
    "Request Header Id (886262) != Response Header reqId (0) in request to provider 429 in process 5. Drop response."
    This issue is resolved in this release by updating the error log and restarting the sfcbd management agent to display correct error messages similar to the following:
    Header Id (373) Request to provider 1 in process 0 failed. Error:Timeout (or other socket error) waiting for response from provider.


Miscellaneous

  • Netlogond might stop responding and ESXi host might lose Active Directory functionality
    Netlogond might consume excessive memory in an Active Directory environment with multiple unreachable domain controllers. As a result, Netlogond might fail and the ESXi host might lose Active Directory functionality.

    This issue is resolved in this release.
  • You might not be able to enable a High Availability (HA) cluster after a single host of the same HA cluster is placed in maintenance mode
    This issue occurs when the inode descriptor number is not set correctly in the ESX root file system (VisorFS), and as a result the stat calls on those inodes fail.
    This issue is resolved in this release.
  • ESXi host displays incorrect values for the resourceCpuAllocMax and resourceMemAllocMax system counters
    When you retrieve the values for the resourceCpuAllocMax and resourceMemAllocMax system counters against the host system (Performance > Advanced Performance Charts > System chart options), the ESXi host returns incorrect values. This issue is observed on a vSphere Client connected to a vCenter Server.

    This issue is resolved in this release.
  • Attempts to set up a filter rule that contains the drive letter for a volume with an unsupported file system might result in the failure of Windows Server 2003 or Windows XP virtual machine with a blue screen
    When you attempt to set up a filter rule that contains the drive letter for a volume with an unsupported file system, Windows Server 2003 or Windows XP virtual machine might fail with a blue screen and might display error messages similar to the following:
    Error code 1000007e, parameter1 c0000005, parameter2 baee5d3b, parameter3 baa69a64, parameter4 baa69760.
    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.Data:
    0000: 53 79 73 74 65 6d 20 45 System E
    0008: 72 72 6f 72 20 20 45 72 rror Er
    0010: 72 6f 72 20 63 6f 64 65 ror code
    0018: 20 31 30 30 30 30 30 37 1000007
    0020: 65 20 20 50 61 72 61 6d e Param
    0028: 65 74 65 72 73 20 63 30 eters c0
    0030: 30 30 30 30 30 35 2c 20 000005,
    0038: 62 61 65 65 35 64 33 62 baee5d3b
    0040: 2c 20 62 61 61 36 39 61 , baa69a
    0048: 36 34 2c 20 62 61 61 36 64, baa6
    0050: 39 37 36 30 9760

    This issue mostly occurs when the Q:\ drive letter created by Microsoft App-V solution is added in the filter rule.

    This issue is resolved in this release.
  • Memory controller error messages might be reported incorrectly as TLB errors when ESXi hosts fail with purple screen
    When ESXi hosts fail with a purple screen, the memory controller error messages might be incorrectly reported as Translation Look-aside Buffer (TLB) error messages, Level 2 TLB Error.

    This issue is resolved in this release.

Networking

  • ESXi host fails to report configuration issues if core dump partition and core dump collector services are not configured
    If an ESX host is neither configured with a core dump partition nor configured to direct core dumps to a dump collector service, important troubleshooting information might be lost. Adding a check for the core dump configuration at the start of the hostd service resolves this issue.

    This issue is resolved in this release.
  • ESXi hosts that are booted in stateless mode appear with name localhost in the syslog file
    When a stateless ESXi host is rebooted and the host is configured to obtain DNS configuration and host name from a DHCP server, the syslog file displays the host's name as localhost instead of the host name obtained from the DHCP server. As a result, for a remote syslog collector, all ESXi hosts appear to have the same host name.

    This issue is resolved in this release.
  • ESXi host might fail with a purple screen and report a page fault exception error
    When virtual machines are configured with e1000 network adapters, ESXi hosts might fail with a purple diagnostic screen and display messages similar to the following:

    @BlueScreen: #PF Exception 14 in world 8229:idle37 IP 0x418038769f23 addr 0xc
    0x418038600000 VMK uptime: 1:13:10:39.757
    0x412240947898:[0x418038769f23]E1000FinalizeZeroCopyPktForTx@vmkernel#nover+0x1d6 stack: 0x41220000
    0x412240947ad8:[0x41803877001e]E1000PollTxRing@vmkernel#nover+0xeb9 stack: 0x41000c958000
    0x412240947b48:[0x418038771136]E1000DevAsyncTx@vmkernel#nover+0xa9 stack: 0x412240947bf8
    0x412240947b98:[0x41803872b5f3]NetWorldletPerVMCB@vmkernel#nover+0x8e stack: 0x412240947cc0
    0x412240947c48:[0x4180386ed4f1]WorldletProcessQueue@vmkernel#nover+0x398 stack: 0x0
    0x412240947c88:[0x4180386eda29]WorldletBHHandler@vmkernel#nover+0x60 stack: 0x2
    0x412240947ce8:[0x4180386182fc]BHCallHandlers@vmkernel#nover+0xbb stack: 0x100410000000000
    0x412240947d28:[0x4180386187eb]BH_Check@vmkernel#nover+0xde stack: 0xf2d814f856ba
    0x412240947e58:[0x4180387efb41]CpuSchedIdleLoopInt@vmkernel#nover+0x84 stack: 0x412240947e98
    0x412240947e68:[0x4180387f75f6]CpuSched_IdleLoop@vmkernel#nover+0x15 stack: 0x70
    0x412240947e98:[0x4180386460ea]Init_SlaveIdle@vmkernel#nover+0x13d stack: 0x0
    0x412240947fe8:[0x4180389063d9]SMPSlaveIdle@vmkernel#nover+0x310 stack: 0x0

    This issue is resolved in this release.
  • Default gateway is left blank if the port group containing the default gateway is disabled and then re-enabled
    If you disable a DHCP enabled port group that contains the default gateway, the default gateway is left blank. When you re-enable the port group, the default gateway is still left blank.

    This issue is resolved in this release. The default gateway is updated and not left blank.
  • ESXi hosts might fail with a purple screen when network traffic passes through the bnx2x device driver
    When network traffic passes through the bnx2x device driver and the vmklinux receives the Large Receive Offload (LRO) generated packets, the network packets might be dropped resulting in the failure of the ESXi hosts with a purple screen.
    The ESXi hosts experience a divide-by-zero exception during the TSO split and finally results in the failure of the host.
    This issue occurs when the bnx2x driver sends the LRO packet with a TCP Segmentation Offload (TSO) MSS value set to zero.
    The ESXi host also fails when the received packet is invalid for any one of the following reasons:
    • The GSO size is zero
    • The GSO type is not supported
    • The VLAN ID is incorrect

    This issue is resolved in this release.
  • ESXi host might fail with a purple diagnostic screen due to a conflict between two DVFilter processes
    If two DVFilter processes attempt to manage a single configuration variable at the same time, one clearing the existing filter configuration while the other attempts to lock it, the ESXi host might fail. This issue occurs when you shut down the guest operating system during the DVFilter cleanup process.
    This issue is resolved in this release.
  • Snapshot taken on a virtual machine with vmxnet3 NIC has incorrect network traffic statistics
    When you take a snapshot of a virtual machine with a vmxnet3 NIC, the virtual machine's network interface is disconnected and reconnected, which resets the broadcast counter and results in incorrect network statistics.

    This issue is resolved in this release.
  • ESXi hosts might fail with a purple screen due to race conditions in ESXi TCP/IP stack
    ESXi hosts might fail with a purple screen and display error messages similar to the following:
    2013-02-22T15:33:14.296Z cpu8:4104)@BlueScreen: #PF Exception 14 in world 4104:idle8 IP 0x4180083e796b addr 0x1
    2013-02-22T15:33:14.296Z cpu8:4104)Code start: 0x418007c00000 VMK uptime: 58:11:48:48.394
    2013-02-22T15:33:14.298Z cpu8:4104)0x412200207778:[0x4180083e796b]ether_output@ # +0x4e stack: 0x41000d44f360
    2013-02-22T15:33:14.299Z cpu8:4104)0x4122002078b8:[0x4180083f759d]arpintr@ # +0xa9c stack: 0x4100241a4e00

    This issue occurs due to race conditions in ESXi TCP/IP stack.

    This issue is resolved in this release.
  • NFS datastores connected through a Layer 3 routed network might exhibit high GAVG for virtual machines running on them when IOPS is low
    When NFS datastores are connected through a Layer 3 routed network and the NFS vmknic is in a different subnet than the NFS filer, the datastores might exhibit high Guest Average Latency (GAVG) for the virtual machines running on them. This issue occurs when I/O Operations Per Second (IOPS) is low. For an IOPS value of 1 or less, the GAVG value for NFS datastores might be as high as 40ms. Under heavy I/O load, GAVG values for the NFS datastores decrease.

    This issue is resolved in this release.
  • Attempts to obtain the permanent MAC address for a VMXNET3 NIC might fail
    When you use the ETHTOOL_GPERMADDR ioctl to obtain the permanent MAC address for a VMXNET3 NIC, if the Linux kernel version is between 2.6.13 and 2.6.23, no results are obtained. If the Linux kernel version is later than 2.6.23, the MAC address returned contains all zeros. (See the note after this list for a way to query the permanent address with ethtool.)

    This issue is resolved in this release.
  • Virtual machines with e1000 NIC driver placed in D3 suspended mode might fail
    A virtual machine might fail, and error messages similar to the following might be written to the vmware.log file, when a guest operating system with the e1000 NIC driver is placed in D3 suspended mode:

    2013-08-20T10:14:35.121Z[+13.605]| vcpu-0| SymBacktrace[2] 000003ffff023be0 rip=000000000039d00f
    2013-08-20T10:14:35.121Z[+13.606]| vcpu-0| Unexpected signal: 11

    This issue occurs for virtual machines that use IP aliasing where the number of IP addresses exceeds 10.

    This issue is resolved in this release.
  • Solaris virtual machines that use the VMXNET3 network adapter might repeatedly report messages in the log files
    When a message from the Solaris virtual machine has too many fragments to fit in the TX ring, the VMXNET3 network adapter reports the following messages repeatedly in the log files:

    last message repeated 274 times
    vmxnet3s: [ID 450982 kern.notice] vmxnet3s:0: overfragmented mp (16)
    last message repeated 399 times
    vmxnet3s: [ID 450982 kern.notice] vmxnet3s:0: overfragmented mp (16)
    last message repeated 398 times
    vmxnet3s: [ID 450982 kern.notice] vmxnet3s:0: overfragmented mp (16)
    last message repeated 393 times
    vmxnet3s: [ID 450982 kern.notice] vmxnet3s:0: overfragmented mp (16)
    last message repeated 399 times
    vmxnet3s: [ID 450982 kern.notice] vmxnet3s:0: overfragmented mp (16)


    This issue is resolved in this release.
  • Virtual machines that access client device CD-ROM might stop responding if the vSphere Client network connection is interrupted
    If the vSphere Client network connection is interrupted when a virtual machine is using a client device CD-ROM, the virtual machine might stop responding and might not be accessible on the network for some time.

    This issue is resolved in this release.
  • Linux commands ip link or ip addr might display the link state for VMXNET3 adapters as Unknown instead of Up
    When you create VMXNET3 adapters on the guest operating system, the ip link or ip addr Linux commands might display the link state as Unknown instead of Up.
    This issue occurs because the default link state for the adapters is set to carrier ok mode, and as a result, the operstate is not updated.

    This release resolves the issue by setting the default link state for the adapters to no carrier mode.
  • Virtual machines might disconnect from the network after they are restarted or migrated
    When vShield Endpoint and Deep Security are used, an issue with the DvFilter module might lead to netGPHeap depletion and a memory leak. This might cause virtual machines to disconnect from the network after they are restarted or migrated using vMotion.
    The log files might display messages similar to the following:
    2012-11-30T11:29:06.368Z cpu18:923386)WARNING: E1000: 8526: failed to enable port 0x30001b6 on vSwitch1: Out of memory

    This issue is resolved in this release.
  • Virtual machines might fail to monitor outbound network traffic when virtual network adapters are used in promiscuous mode
    If you use virtual network adapters in promiscuous mode to track network activity, a certain issue with the port mirroring feature might disable a mirror port and cause virtual machines to stop tracking the outbound network traffic.

    This issue is resolved in this release.
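As referenced in the VMXNET3 permanent MAC address item above, the standard ethtool front end to the ETHTOOL_GPERMADDR ioctl can be used inside the guest to verify the fix. A minimal sketch, assuming the interface is named eth0; the reported address is illustrative:

ethtool -P eth0
Permanent address: 00:50:56:aa:bb:cc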

Security

  • Update to OpenSSL library addresses multiple security issues
    The ESXi userworld OpenSSL library is updated to version openssl-0.9.8y to resolve multiple security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2013-0169 and CVE-2013-0166.
  • Update to libxml2 library addresses a security issue
    The ESXi userworld libxml2 library has been updated to resolve a security issue.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2013-0338 to this issue.
  • Update to libxslt
    The ESXi userworld libxslt package is updated.
  • VMware ESXi and ESX contain a vulnerability in hostd-vmdb
    To exploit this vulnerability, an attacker must intercept and modify the management traffic. Exploitation of the issue may lead to a Denial of Service of the hostd-vmdb service.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2013-5970 to this issue.

Server Configuration

  • Performance charts display a constant power usage of 0 watts for IBM System x iDataPlex dx360 M3 servers
    When you view the performance chart related to power usage for IBM System x iDataPlex dx360 M3, the chart shows a constant 0 watts. This issue occurs due to a change in the IPMI sensor IDs used by IBM System x iDataPlex dx360 M3 servers.

    This issue is resolved in this release.
  • Red Hat Enterprise Linux 4.8 32-bit virtual machine might show higher load average on ESXi 5.0 compared to ESX/ESXi 4.0
    A Red Hat Enterprise Linux 4.8 32-bit virtual machine with a workload that is mostly idle with intermittent or simultaneous wakeup of multiple tasks might show a higher load average on ESXi 5.0 as compared to ESX/ESXi 4.0.

    This issue is resolved in this release.
  • Multiple ESXi servers might stop responding while an ESXi server is being added to vCenter Server
    When you attempt to add an ESXi server to vCenter Server, multiple ESXi servers might stop responding and an error message similar to the following might be displayed:
    Unable to access the specified host, either it doesn't exist, the server software is not responding, or there is a network problem.
    This issue occurs when a high volume of HTTP URL requests are sent to hostd and the hostd service fails.

    This issue is resolved in this release.
  • Virtual machines that have migrated from ESX/ESXi 3.5 to ESXi 5.0 might fail to reboot
    Multiple virtual machines might fail to reboot and generate a VMX core file after reboot. This issue is seen with virtual machines that are migrated by using vMotion from ESX/ESXi 3.5 hosts to ESX/ESXi hosts of versions ESX/ESXi 4.0 Update 2, ESX/ESXi 4.1 Update 2, ESXi 5.0 Update 2, and later.

    This issue is resolved for ESXi 5.0 hosts in this release.
  • Host compliance check fails with an error related to extracting indication configuration
    When an invalid CIM subscription is present in the system and you perform a host profile compliance check against a host, an error message similar to the following might be displayed:

    Error extracting indication configuration: (6, u'The requested object could not be found')

    You cannot apply the host profile on the host.

    This issue is resolved in this release. You can apply host profiles even if there is an invalid indication in the host profile.
  • ESXi host becomes unresponsive when you shut down or reboot the ESXi using the Direct Console User Interface
    When you attempt to shut down or reboot an ESXi host through the Direct Console User Interface (DCUI), the host stops responding and the shutdown process cannot complete.

    This issue is resolved in this release.
  • Attempts to apply a complex host profile might result in a timeout
    When you apply a complex host profile, for example, one that contains a large number of port groups and datastores, the operation might time out with an error message similar to the following:
    2013-04-09T15:27:38.562Z [4048CB90 info 'Default' opID=12DA4C3C-0000057F-ee] [VpxLRO] -- ERROR task-302 -- -- vim.profile.host.profileEngine.HostProfileManager.applyHostConfig: vmodl.fault.SystemError:
    --> Result:
    --> (vmodl.fault.SystemError) {
    --> dynamicType = ,
    --> faultCause = (vmodl.MethodFault) null,
    --> reason = "",
    --> msg = "A general system error occurred: ",
    --> }
    The hostd default timeout is 10 minutes. Because applyHostConfig is not a progressive task, the hostd service cannot distinguish between a failed task and a long-running task during the hostd timeout. As a result, the hostd service reports that applyHostConfig has failed.

    This issue is resolved in this release by setting a 30-minute timeout as part of the HostProfileManager managed object. However, this issue might still occur when you apply a large host profile and the task exceeds the 30-minute timeout limit. To work around this issue, reapply the host profile.
  • ESXi host might be disconnected from the vCenter Server when hostd fails
    ESXi hosts might be disconnected from the vCenter Server when hostd fails with error messages similar to the following:
    2013-06-04T11:47:30.242Z [6AF85B90 info 'ha-eventmgr'] Event 110 : /sbin/hostd crashed (1 time(s) so far) and a core file might have been created at /var/core/hostd-worker-zdump.000. This might have caused connections to the host to be dropped.
    This issue occurs when a check is performed to ensure correct cache configuration.

    This issue is resolved in this release.
  • Connection to the ESXi host might be lost when you execute ESXCLI commands or use monitoring tools that rely on SNMP agent
    When you execute ESXCLI commands or if you use monitoring tools that rely on data from the SNMP agent in ESXi, the connection to the ESXi host might be lost due to failure of the hostd service.

    This issue is resolved in this release.
  • Unable to assign permission to Active Directory users and groups
    After adding an ESXi 5.0 host to an Active Directory (AD) domain, an attempt to assign permission to AD users and groups might fail. You are unable to view the domain to which you have joined the host in the drop-down menu for adding permissions to AD users and groups. This issue occurs because the lsassd service on the host stops. The lsassd.log file displays entries similar to the following:

    20111209140859:DEBUG:0xff92a440:[AD_DsEnumerateDomainTrusts() /build/mts/release/bora-396388/likewise/esxi-esxi/src/linux/lsass/server/auth-providers/ad-provider/adnetapi.c:1127] Failed to enumerate trusts at host.your.domain.name.net (error 59)
    20111209140859:DEBUG:0xff92a440:[AD_DsEnumerateDomainTrusts() /build/mts/release/bora-396388/likewise/esxi-esxi/src/linux/lsass/server/auth-providers/ad-provider/adnetapi.c:1141] Error code: 40096 (symbol: LW_ERROR_ENUM_DOMAIN_TRUSTS_FAILED)


    This issue is resolved in this release.
  • ESXi host fails with a purple diagnostic screen due to buffer overflow and truncation in hpsc proc handler
    When you run the cat command on an HP Smart Array controller with 40 or more logical unit numbers, the ESXi host fails with a purple diagnostic screen. This happens because of buffer overflow and data truncation in the hpsc proc handler.

    This issue is resolved in this release.
  • When you boot from SAN, discovering the boot device might take longer depending on the network bandwidth
    When you boot from a SAN, the boot device discovery process might take more time to complete. Passing a rescan timeout parameter on the ESX command line before the boot process allows the user to configure the timeout value, which resolves this issue.

    This issue is resolved in this release.
  • ESXi host might stop logging messages into log files due to memory related errors
    Insufficient memory allocation to the logging resource pool might cause ESXi to stop logging messages in log files. Messages similar to the following are displayed in the log files:

    <TimeStamp> vmsyslog.main : ERROR ] Watchdog 2625 fired (child 2626 died with status 256)!
    <TimeStamp> vmsyslog : CRITICAL] vmsyslogd daemon starting (69267)
    <TimeStamp> vmsyslog.main : ERROR ] Watchdog 2625 exiting
    <TimeStamp> vmsyslog.loggers.file : ERROR ] Gzip logfile /scratch/log/hostd0.gz failed
    <TimeStamp> vmsyslog.main : ERROR ] failed to write log, disabling
    <TimeStamp> vmsyslog.main : ERROR ] failed to send vob: [Errno 28] No space left on device


    This issue is resolved in this release. The logging memory pool limit is increased to 48MB.
  • VMkernel network interfaces other than vmk0 might fail to renew their IP lease from a DHCP server on another subnet
    When a VMkernel interface has acquired its IP lease from a DHCP server in another subnet, error messages similar to the following might be displayed by the DHCP server:
    2012-08-29T21:36:24Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:36:35Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:36:49Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:37:08Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:37:24Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:37:39Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:37:52Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:38:01Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:38:19Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:38:29Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:38:41Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:38:53Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:39:09Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:39:24Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67

    Providing the DHCP server with an interface on the same subnet as the VMkernel port, so that DHCP renewal can succeed, resolves this issue.

    This issue is resolved in this release.
  • VMKernel fails when a Virtual Machine Monitor returns an invalid Machine Page Number
    When VMX passes a VPN value to read a page, VMKernel fails to find a valid machine page number for that VPN value, which results in the host failing with a purple diagnostic screen.

    This issue is resolved in this release.
  • ESX host might fail due to failure in memory allocation from world heap for firing traces
    ESX hosts might fail with a purple diagnostic screen due to a memory allocation failure in the world heap for firing traces, a mechanism the VMkernel uses to batch after-write traces on guest pages. This issue occurs when the memory allocation failure is not handled properly.

    This issue is resolved in this release.
  • Running esxcfg-nas commands on an ESXi host results in a PREF warning
    When you run the esxcfg-nas -l command, the ESXi host displays a warning message similar to the following:
    PREF Warning: PreferenceGet(libdir) before Preference_Init, do you really want to use default?

    This issue is resolved in this release.
  • Hostd performance test execution results in regression issues
    During hostd performance tests, virtual machine operations such as Create nVMs, Reconfig nVMs, and Clean nVMs might cause regressions. This happens because a datastore refresh call is processed for every vdiskupdate message. Modifying the datastore refresh logic resolves this issue.

    This issue is resolved in this release.
  • ESXi host displays storage related Unknown messages in syslog.log files
    When vpxa writes syslog entries longer than 1024 bytes, vpxa categorizes the message body after 1024 bytes as Unknown and puts it in the syslog.log file instead of the vpxa.log file. As a result, the ESXi host displays storage-related Unknown messages in syslog.log files. In this release, the line buffer limit is increased to resolve this issue.

    This issue is resolved in this release.

Storage

  • Performance issues might be observed on ESXi when lazy zeroed thick disks are migrated
    The migration rates of some lazy zeroed thick disks might be slow on some ESXi hosts when compared to other disk transfers of the same size. This issue occurs when memory pages for the file system cache (buffer cache) fall in the first 2MB region of memory. As a result, migration rates on the ESXi host become slow.

    This issue is resolved in this release.
  • ESXi host fails with a purple diagnostic screen when you add a new host to a cluster
    When you add a new host to the cluster and reconfigure the High Availability, the ESXi host fails with a purple diagnostic screen.
    This issue is resolved in this release.
  • ESXi host might fail with a purple screen due to metadata corruption of LUNs
    When you perform certain virtual machine operations, an issue related to metadata corruption of LUNs might sometimes cause an ESXi host to fail with a purple screen and display error messages similar to the following:
    @BlueScreen: #DE Exception 0 in world 4277:helper23-15 @ 0x41801edccb6e
    3:21:13:31.624 cpu7:4277)Code start: 0x41801e600000 VMK uptime: 3:21:13:31.624
    3:21:13:31.625 cpu7:4277)0x417f805afed0:[0x41801edccb6e]Fil3_DirIoctl@esx:nover+0x389 stack: 0x410007741f60
    3:21:13:31.625 cpu7:4277)0x417f805aff10:[0x41801e820f99]FSS_Ioctl@vmkernel:nover+0x5c stack: 0x2001cf530
    3:21:13:31.625 cpu7:4277)0x417f805aff90:[0x41801e6dcf03]HostFileIoctlFn@vmkernel:nover+0xe2 stack: 0x417f805afff0
    3:21:13:31.625 cpu7:4277)0x417f805afff0:[0x41801e629a5a]helpFunc@vmkernel:nover+0x501 stack: 0x0
    3:21:13:31.626 cpu7:4277)0x417f805afff8:[0x0] stack: 0x0


    This issue is resolved in this release. Metadata corruption of LUNs will now result in an error message.
  • NetApp has requested an update to the SATP claim rule which prevents the iSCSI from entering an unresponsive state
    NetApp has requested an update to the SATP claim rule which prevents the reservation conflict for a Logical Unit Number (LUN). The updated SATP claim rule uses the reset option to clear the reservation from the LUN and allows other users to set the reservation option.

    This issue is resolved in this release.
  • False Device Busy (D:0x8) status messages might be displayed in VMkernel log files when VMKlinux might incorrectly set the device status
    When VMKlinux sets the device status incorrectly, false Device Busy (D:0x8) status messages similar to the following are written to VMkernel log files:
    2013-04-04T17:56:47.668Z cpu0:4012)ScsiDeviceIO: SCSICompleteDeviceCommand:2311: Cmd(0x412441541f80) 0x16, CmdSN 0x1c9f from world 0 to dev "naa.600601601fb12d00565065c6b381e211" failed H:0x0 D:0x8 P:0x0 Possible sense data: 0x0 0x0 0x0
    This generates false alarms as the storage array does not send any Device Busy status message for SCSI commands.
    This issue is resolved in this release by correctly pointing to Host Bus Busy (H:0x2) status messages for issues in the device drivers similar to the following:
    2013-04-04T13:16:27.300Z cpu12:4008)ScsiDeviceIO: SCSICompleteDeviceCommand:2311: Cmd(0x4124819c2f00) 0x2a, CmdSN 0xfffffa80043a2350 from world 4697 to dev "naa.600601601fb12d00565065c6b381e211" failed H:0x2 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0

  • The size of a thick provisioned 2TB virtual machine disk is shown as 0 bytes in the datastore browser
    When you create a thick-provisioned virtual machine disk file (VMDK) with a size of 2TB, the datastore browser incorrectly reports the VMDK disk size as 0.00 bytes.
    This issue is resolved in this release.
  • Cloning and cold migration of virtual machines with large VMDK and snapshot files might fail
    You might be unable to clone and perform cold migration of virtual machines with large virtual machine disk (VMDK) and snapshot files to other datastores. This issue occurs when the vpxa process exceeds the limit of memory allocation during cold migration. As a result, the ESXi host loses the connection from the vCenter Server and the migration fails.

    This issue is resolved in this release.
  • False alarms might be generated when provisioned space values for NFS datastore are incorrectly calculated
    Under certain conditions, provisioned space value for an NFS datastore might be calculated incorrectly and cause false alarms to be generated.

    This issue is resolved in this release.
  • vCenter Server or vSphere Client might get disconnected from the ESXi host during VMFS datastore creation
    vCenter Server or vSphere Client might get disconnected from the ESXi host during VMFS datastore creation. This issue occurs when hostd fails with an error message similar to the following in the hostd log:
    Panic: Assert Failed: "matchingPart != __null"
    The hostd process fails during VMFS datastore creation on disks with certain partition configuration that requires partition alignment.

    This issue is resolved in this release.
  • Microsoft failover cluster I/O might not survive a storage fault
    Microsoft failover cluster I/O might fail to survive a storage fault, and I/O might fail with a reservation conflict. This issue occurs when two failover cluster virtual machines are placed on two different ESXi hosts and the storage array is running in an ALUA configuration.

    This issue is resolved in this release.
  • Unable to mount NFS datastore with a remote path name of 115 characters or more
    You might be unable to mount an NFS datastore that has a remote path name of 115 characters or more. An error message similar to the following is displayed:
    Unable to get Console path for Mount

    The ESXi host maintains each NAS volume as a combination of the NFS server IP address and the complete path name of the exported share. This issue occurs when this combination exceeds 128 characters.

    This issue is resolved in this release by increasing the maximum length of the NAS volume string to 1024 characters.
  • ESXi host might stop responding from the vCenter Server when you re-signature a large number of VMFS snapshot volumes
    An ESXi host might stop responding intermittently or get disconnected from the vCenter Server when you re-signature a large number of Virtual Machine File System (VMFS) snapshot volumes.

    This issue is resolved in this release.

Upgrade and Installation

  • Host remediation against bulletins that have only Reboot impact fails
    During the remediation process of an ESXi host against a patch baseline that consists of bulletins that have only Reboot impact, Update Manager does not power off or suspend the virtual machines that are on the host. As a result, the host cannot enter maintenance mode, and the remediation cannot be completed.

    This issue is resolved in bulletins created in this release and later.
  • Unable to find state.tgz backup file on boot device while attempting to upgrade ESXi host
    When you attempt to upgrade an ESXi host using the ISO image, you might be unable to find the state.tgz backup file on the boot device.
    This issue occurs if the state.tgz backup file is not updated because you did not shut down your machine properly before upgrading. As a result, a No such a file exception error message is displayed when you reset the ESXi license.

    This issue is resolved in this release.
  • The esxcli command fails to handle datastore names which include spaces or parentheses
    Datastore names with spaces or parentheses are handled incorrectly by the esxcli command. This issue is observed when the user attempts an ESXi upgrade using the esxcli command.

    This issue is resolved in this release.
  • Syslog server configuration might not be migrated to new configurations while upgrading multiple ESXi 4.x to ESXi 5.x
    In ESXi 4.x, the configuration of Syslog.Local.DatastorePath path is stored in the /etc/syslog.conf file.
    However, in ESXi 5.x, the /etc/syslog.conf file is replaced by /etc/vmsyslog.conf file and the configuration of Syslog.global.logDir directory is stored in the /etc/vmsyslog.conf file.
    As a result, the Syslog server configurations of logfile and loghost attributes in the /etc/syslog.conf file are not migrated to the newly configured logdir and loghost attributes in the new /etc/vmsyslog.conf file. Hence, while upgrading multiple ESXi 4.x servers to ESXi 5.x servers, you need to manually configure the Syslog.global.logDir directory each time after the upgrade is complete.

    This issue is resolved in this release by updating the attributes in the following ways:
    1. The loghost attribute in the /etc/syslog.conf file is retained in the new /etc/vmsyslog.conf file.
    2. The logfile attribute is no longer valid. This attribute is migrated to the logdir attribute in the new /etc/vmsyslog.conf file. The value of the logdir attribute is the directory name of the old logfile attribute value. The migration happens only when the directory is still a valid directory on the upgraded system. For a way to set the log directory manually after an upgrade, see the sketch at the end of this section.

  • Attempts to upgrade ESXi host in a HA cluster might fail with vCenter Update Manager
    Upgrading an ESXi host in a High Availability (HA) cluster might fail with an error message similar to the following with vCenter Update Manager (VUM):
    the host returned esx update error code 7
    This issue occurs when multiple staging operations are performed with different baselines in Update Manager.

    This issue is resolved in this release.
  • Scripted ESXi installation or upgrade from an attached USB drive might fail if the file system type on any of the USB drives is not FAT16 or FAT32
    If multiple USB flash drives are attached to an ESXi host, scripted ESXi installation or upgrade by using the ks=usb boot option might fail with an exception error if the file system type on any of the USB drives with MS-DOS partitioning is not FAT16 or FAT32.

    This issue is resolved in this release.
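As referenced in the syslog migration item above, the log directory on an upgraded host can also be set by hand when the migration does not carry it over. A minimal sketch using the ESXi 5.x esxcli syslog namespace; the datastore path is only an example:

esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/systemlogs
esxcli system syslog reload

The reload command makes the logging daemon pick up the new configuration without a host reboot.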

vCenter Server and vSphere Client

  • The Summary tab might display incorrect values for provisioned space values for virtual machines and NFS or NAS Datastores on VAAI enabled hosts
    When a virtual disk with Thick Provision Lazy zeroed format is created on a VAAI supported NAS in a VAAI enabled ESXi host, the provisioned space for the corresponding virtual machine and datastore might be displayed incorrectly.

    This issue is resolved in this release.
  • Customized performance chart does not provide the option of displaying virtual disk charts for virtual machine objects
    When you use the virtual disk metric to view performance charts, you only have the option of viewing the virtual disk performance charts for the available virtual disk objects.

    This release lets you view virtual disk performance charts for virtual machine objects as well. This is useful when you need to trigger alarms based on virtual disk usage by virtual machines.

Virtual Machine Management

  • Virtual machines fail to power on when their VMDK files are not accessible
    On an ESXi host, a virtual machine fails to power on if its VMDK file is not accessible and if its VMX file has disk.powerOnWithDisabledDisk set to TRUE and answer.msg.disk.notConfigured set to Yes. The following error message is displayed:
    The system cannot find the file specified.

    This issue is resolved in this release.
  • VMX file of a virtual machine is not updated after you take its quiesced snapshot
    When you take a quiesced snapshot of a virtual machine, the vmx file does not get updated until you power the virtual machine on again. The vmx configuration is outdated and it points to the original VMDK. If the virtual machine fails between the snapshot operation and the next power on, data loss occurs and the VMDK is left in an orphaned state. 

    This issue is resolved in this release.
  • Attempts to install Linux on a virtual machine might fail
    When a floppy image is attached to a virtual machine, an attempt to install Linux operating system on it might fail.
    The vmware.log file might contain entries similar to the following:

    RemoteFloppyVMX: Remote cmd uid 0 timed out.
    | vcpu-3| Caught signal 11 -- tid 115057
    | vcpu-3| SIGNAL: eip 0x1f5c2eca esp 0xcaf10da0 ebp 0xcaf10e00
    | vcpu-3| SIGNAL: eax 0x0 ebx 0x201826a0 ecx 0x593faa10 edx 0x201d63b0 esi 0x0 edi 0x593faa00
    | vcpu-3| r8 0x593faa00 r9 0x0 r10 0x1fd79f87 r11 0x293 r12 0x2022d000 r13 0x0 r14 0x0 r15 0x1fd6eba0
    | vcpu-3| Backtrace:
    | vcpu-3| Backtrace[0] 00000000caf109a0 rip=000000001f8caf9e rbx=000000001f8cad70 rbp=00000000caf109c0 r12=0000000000000000 r13=00000000caf198c8 r14=00000000caf10b50 r15=0000000000000080
    ....
    | vcpu-3| SymBacktrace[2] 00000000caf10ad0 rip=000000000038c00f
    | vcpu-3| Unexpected signal: 11.
    | vcpu-3| Writing monitor corefile "/vmfs/volumes/519f119b-e52d3cf3-6825-001999db3236/EMS/vmmcores.gz"


    This issue is resolved in this release.
  • A page fault in the virtual machine monitor results in an ESXi host failure
    A page fault in the virtual machine monitor might cause an ESXi host to fail with a purple diagnostic screen and report a page fault exception error similar to the following in vmware.log:
    2013-05-15T12:48:25.195Z| vcpu-1| W110: A core file is available in "/vmfs/volumes/5088c935-f71201bf-d750-90b11c033174/BF-TS5/vmx-zdump.000"
    2013-05-15T12:48:25.196Z| vcpu-1| I120: Msg_Post: Error
    2013-05-15T12:48:25.196Z| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
    2013-05-15T12:48:25.196Z| vcpu-1| I120+ vcpu-1:VMM fault 14: src=MONITOR rip=0xfffffffffc243748 regs=0xfffffffffc008e98
    2013-05-15T12:48:25.196Z| vcpu-1| I120: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5088c935-f71201bf-d750-90b11c033174/BF-TS5/vmware.log".
    2013-05-15T12:48:25.196Z| vcpu-1| I120: [msg.panic.requestSupport.withoutLog] You can request support.
    2013-05-15T12:48:25.196Z| vcpu-1| I120: [msg.panic.requestSupport.vmSupport.vmx86]
    2013-05-15T12:48:25.196Z| vcpu-1| I120+ To collect data to submit to VMware technical support, run "vm-support".
    2013-05-15T12:48:25.196Z| vcpu-1| I120: [msg.panic.response] We will respond on the basis of your support entitlement.
    2013-05-15T12:48:25.196Z| vcpu-1| I120:


    This issue is resolved in this release.


  • Virtual machines might fail when you perform virtual disk-related operations
    Virtual disk-related I/O operations, specifically those involving CD-ROM access retries, might cause virtual machines to fail.
    The log files might contain entries similar to the following:
    <Time_Stamp> vmx| [msg.vmxaiomgr.retrycontabort.rudeunplug] The operation on file "/vmfs/volumes/16b2bd7c-1d66d7ef/VMware/VMware-VIMSetup-all-5.1.0-947939.iso" failed.
    <Time_Stamp> vmx| --> If the file resides on a remote file system, make sure that the network connection and the server where this disk resides are functioning properly. If the file resides on removable media, reattach the media.
    <Time_Stamp> vmx| --> Select Retry to attempt the operation again.
    ...
    Followed by:

    <Time_Stamp> vmx| MsgQuestion: msg.vmxaiomgr.retrycontabort.rudeunplug reply=0
    <Time_Stamp> vmx| VMXAIOMGR: Reopening /vmfs/volumes/16b2bd7c-1d66d7ef/VMware/VMware-VIMSetup-all-5.1.0-947939.iso and retrying outstanding IOs
    ...


    This issue is resolved in this release.

  • Attempts to export a virtual machine as OVF fail with a timeout error
    When you export a virtual machine in Open Virtualization Format (OVF), the operation times out if the virtual machine has a large portion of empty blocks on its disk (for example, 210GB or more) and uses an ext2 or ext3 file system.

    This issue is resolved in this release.

vMotion and Storage vMotion

  • Incremental backup might fail with a FileFault error for QueryChangedDiskAreas after moving a VMDK to a different volume using vMotion for a CBT-enabled virtual machine
    When you enable Changed Block Tracking (CBT) on a virtual machine and perform QueryChangedDiskAreas after moving a virtual machine disk (VMDK) to a different volume by using vMotion, the incremental backup might fail with a FileFault error similar to the following:
    2012-09-04T11:56:17.846+02:00 [03616 info 'Default' opID=52dc4afb] [VpxLRO] -- ERROR task-internal-4118 -- vm-26512 -- vim.VirtualMachine.queryChangedDiskAreas: vim.fault.FileFault:
    --> Result:
    --> (vim.fault.FileFault) {
    --> dynamicType = ,
    --> faultCause = (vmodl.MethodFault) null,
    --> file = "/vmfs/volumes/4ff2b68a-8b657a8e-a151-3cd92b04ecdc/VM/VM.vmdk",
    --> msg = "Error caused by file /vmfs/volumes/4ff2b68a-8b657a8e-a151-3cd92b04ecdc/VM/VM.vmdk",
    --> }

    This issue occurs when a library function incorrectly reinitializes the disk change tracking facility.

    This issue is resolved in this release.
  • Storage vMotion of a virtual machine with 2TB of storage might fail
    When you use Storage vMotion to migrate a virtual machine with 2TB of storage (for example, two 1TB disks), an error message similar to the following might be displayed:
    A general system error occurred: Source detected that destination failed to resume.


    The virtual machine fails to start on the destination host and an error message similar to the following is displayed:
    Error: "VMware ESX unrecoverable error: (vmx) Unexpected signal 8".

    This issue is resolved in this release.

VMware HA and Fault Tolerance

  • Failover of a virtual machine to a designated failover host might not be successful for an HA-enabled ESXi host
    When High Availability (HA) is enabled on the ESXi host and vMotion is performed, the failover of a virtual machine to a designated failover host might not be successful.
    This issue occurs when the virtual machine swap files (.vswp files) are locked, causing the Fault Domain Manager (FDM) agents for HA to fail to fail over the virtual machine to the designated host.

    This issue is resolved in this release.

VMware Tools

  • Virtual Machine Communication Interface driver fails to compile on Linux kernel 3.8.0-rc3+
    When you install VMware Tools on Linux kernel 3.8.0-rc3+, the VMCI drivers fail to compile. Removing some Linux drivers from the kernel resolves this issue.

    This issue is resolved in this release.
  • After installing VMware Tools on a CentOS 6.4 32-bit virtual machine, the system restarts repeatedly
    After you install VMware Tools on a CentOS 6.4 32-bit virtual machine and reboot it, the virtual machine restarts repeatedly. This issue occurs because of a kernel incompatibility.

    This issue is resolved in this release.
  • SMVI might be unable to back up the machine after you upgrade VMware Tools
    After upgrading VMware Tools, NetApp SnapManager for Virtual Infrastructure (SMVI) might be unable to back up the virtual machine. Also, creating a quiesced snapshot might fail with an error message similar to the following:
    Cannot create a quiesced snapshot because the create snapshot operation exceeded the time limit for holding off I/O in the frozen virtual machine.

    This issue occurs with Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 virtual machines. The issue does not occur in Windows 2003 virtual machines.

    This issue is resolved in this release.
  • Incorrect subnet mask information might be displayed by the prefixLength property of the NetIpConfigInfoIpAddress data object on some virtual machines
    The NetIpConfigInfoIpAddress data object provides the information about a specific IP address. For some virtual machines, the prefixLength property of the NetIpConfigInfoIpAddress data object, which is used to denote the length of a generic Internet network address prefix, might display incorrect subnet mask information.
    This issue occurs when the IP address endianness attribute, which determines how bytes are ordered within computer memory, is not handled correctly in the subnet mask calculation.
    This issue is observed in Windows Server 2008 R2 (64-bit) and Windows Server 2003 virtual machines.

    This issue is resolved in this release.
  • Default SVGA driver might cause Windows Server 2008 virtual machines to stop responding after VMware Tools installation
    After you install VMware Tools, Windows Server 2008 virtual machines might stop responding when a restart operation is initiated from the system login page. This issue occurs when the default settings of the SVGA drivers that are installed with VMware Tools are incorrect. The virtual machines might also stop responding if you move the mouse and press any key during the restart process.

    This issue is resolved in this release.
  • When you install VMware Tools using Operating System Specific Packages (OSPs), the /tmp/vmware-root directory fills up with vmware-db.pl.* files
    When you install VMware Tools using OSPs, you might see an increase in the number of log files in the /tmp/vmware-root directory. This issue is observed on SUSE Linux Enterprise Server 11 Service Pack 2 and Red Hat Enterprise Linux Server 6.

    This issue is resolved in this release.
  • Attempts to install VMware Tools might fail with Linux kernel version 3.7 and later
    The VMware Tools installation scripts are unable to identify the new kernel header path with Linux kernel version 3.7 and later, so the VMware Tools drivers are not compiled. This might cause attempts to install VMware Tools to fail.

    This issue is resolved in this release.
  • VMware Tools service fails when the InstallPath registry key is missing
    During uninstallation of VMware Tools, the vmusr process might fail. This happens because the uninstall process starts without waiting for the vmusr process to finish. Specifically, the uninstall process deletes a registry key that the vmusr process later tries to read, leading to a VMware Tools service failure.

    This issue is resolved in this release.
  • vCenter Protect agent displays a warning message for an unsigned executable during a VMware Tools update
    When you attempt a VMware Tools update in VMware Workstation, the vCenter Protect agent displays a pop-up message indicating the use of an unsigned executable. Including the file in signed form resolves this issue.

    This issue is resolved in this release.
  • Unable to display names and descriptions of the VM Processor or the VM Memory performance counters on Windows Vista or later guest operating systems
    When remote performance logging is configured on a guest operating system such as Windows Vista or later running as an administrative user, the names and descriptions of the VM Processor and VM Memory counters might not be displayed in the Windows Performance Monitor (perfmon) console.
    This happens when the locale for the Windows guest operating system is different from en_us or de. This issue occurs with VMware Tools version 8.3.1.2.

    This issue is resolved in this release.
  • UEK2-200 and UEK-400 pre-built modules (PBMs) might be missing from the Linux Tools installer for Oracle Linux 5.x kernels
    UEK2-200 and UEK-400 PBMs might be missing from the Linux Tools installer for Oracle Linux 5.9 2.6.39-200/400 kernels.

    This issue is resolved in this release.
  • VMware Tools is updated to provide pre-built modules for SUSE Linux Enterprise Server 11 SP3 and Oracle Linux 6.4

Known Issues

The following known issues have been discovered through rigorous testing and will help you understand some behavior you might encounter in this release. This list of issues pertains to this release of ESXi 5.0 Update 3, ESXi 5.0 Update 2, ESXi 5.0 Update 1 and ESXi 5.0 only. Some known issues from previous releases might also apply to this release. If you encounter an issue that is not listed in this known issues list, you can review the known issues from previous releases, search the VMware Knowledge Base, or let us know by providing feedback. Known issues not previously documented are marked with the * symbol.

Known Issues List

The issues are grouped as follows.

Installation
  • Extraneous networking-related warning message is displayed after ESXi 5.0 Update 3 is installed
    After you install ESXi 5.0 Update 3, the ESXi Direct Console User Interface (DCUI) displays a warning message similar to the following:

    Warning: DHCP look up failed. You may be unable to access this system until you customize its network configuration


    However, the host acquires DHCP IP and can ping other hosts.

    Workaround: This error message is benign and can be ignored. The error message disappears if you press Enter on the keyboard.

  • In scripted installations, the ESXi installer does not apply the --fstype option for the part command in the ks file
    The --fstype option for the part command is deprecated in ESXi 5.0. The installer accepts the --fstype option without displaying an error message, but does not create the partition type specified by the --fstype option. By default, the installer always creates VMFS5 partitions in ESXi 5.0. You cannot specify a different partition type with the --fstype option for the part command.

  • Rules information missing when running Image Builder cmdlet to display modified image profile in PowerShell 1.0
    After you install vSphere PowerCLI on Microsoft PowerShell 1.0 and add an OEM software package to an image profile, when you list the image profile, information about the Rules property is missing.

    Workaround: Access the rules information by viewing the Rules property of the image profile object.

  • Scripted ESXi installation or upgrade from CD or DVD fails unless the boot line command uses uppercase characters for the script file name
    When you perform a scripted installation or upgrade from an ESXi 5.0 installer ISO written to CD or DVD along with the installation or upgrade script (kickstart file), the installer recognizes the kickstart file name only in uppercase, even if the file was named in lowercase. For example, if the kickstart file is named ks.cfg, and you use the ks=cdrom:/ks.cfg boot line command to specify the kickstart file location, the installation fails with an error message similar to HandledError: Error (see log for more info): cannot find kickstart file on cd-rom with path -- /ks.cfg.

    Workaround: Use uppercase for the kickstart file name in the boot line command to specify the kickstart file, for example: ks=cdrom:/KS.CFG

  • Misleading error message when you attempt to install VIBs and you use a relative path
    While attempting to install a depot, VIB, or profile by using the esxcli software vib command, if you specify a relative path, the operation fails with the error No such file or directory: '/var/log/vmware/a.vib'

    Workaround: Specify the absolute path when you perform the installation.
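    For example, assuming the VIB has been copied to a datastore (the path and file name here are hypothetical):
    esxcli software vib install -v /vmfs/volumes/datastore1/a.vib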

  • After you install the vSphere Web Client, a browser opens and displays a blank page
    After you install the vSphere Web Client, a browser opens and displays a blank page when you click Finish in the installation wizard. The page remains blank and the browser does not connect to the vSphere Administration application.

    Workaround: Close the browser and start the vSphere Administration Application page from the Start menu.

Upgrade

  • Cannot apply ESXi 5.0 Update 3 VIBs through PowerCLI on an ESXi host connected through vCenter Server
    On an ESXi 5.0 host managed by vCenter Server, attempts to apply ESXi 5.0 Update 3 VIBs by using GET-ESXCLI commands in PowerCLI fail with error messages similar to the following:
    2011-11-18T09:53:50Z esxupdate: root: ERROR: Traceback (most recent call last):
    2011-11-18T09:53:50Z esxupdate: root: ERROR: File "/usr/lib/vmware/esxcli-software", line 441, in <module>
    2011-11-18T09:53:50Z esxupdate: root: ERROR: main()
    2011-11-18T09:53:50Z esxupdate: root: ERROR: File "/usr/lib/vmware/esxcli-software", line 432, in main
    2011-11-18T09:53:50Z esxupdate: root: ERROR: ret = CMDTABLE[command](options)
    2011-11-18T09:53:50Z esxupdate: root: ERROR: File "/usr/lib/vmware/esxcli-software", line 329, in VibInstallCmd
    2011-11-18T09:53:50Z esxupdate: root: ERROR: raise Exception("No VIBs specified with -n/--vibname or -v/--viburl.")
    2011-11-18T09:53:50Z esxupdate: root: ERROR: Exception: No VIBs specified with -n/--vibname or -v/--viburl.


    Workaround: None
  • A live update with ESXCLI fails with a VibDownloadError message
    When you perform the following tasks, in sequence, a transaction that requires a reboot fails and a VibDownloadError message appears.

    1. You perform a live update by using the esxcli software profile update or esxcli software vib update command.
    2. Before rebooting, you perform a transaction that requires a reboot, and the transaction does not complete successfully. One common possible failure is signature verification, which can be checked only after the VIB is downloaded.
    3. Without rebooting the host, you attempt to perform another transaction that requires a reboot. The transaction fails with a VibDownloadError message.

    Workaround: Perform the following steps to resolve the problem.

    1. Reboot the ESXi host to clean up its state.
    2. Repeat the live install.
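
    The live install in step 2 typically takes the following form (the depot path and profile name here are hypothetical):
    esxcli software profile update -d /vmfs/volumes/datastore1/depot.zip -p ESXi-5.0.0-20131002001-standard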
       
  • During scripted upgrades from ESX/ESXi 4.x to ESXi 5.0, MPX and VML disk device names change, which might cause the upgrade to fail
    MPX and VML disk device names might not persist after a host reboot. If the names change after reboot in a scripted upgrade, the upgrade might be interrupted.

    Workaround: When possible, use Network Address Authority Identifiers (NAA IDs) for disk devices. For machines that do not have disks with NAA IDs, such as Hewlett Packard machines with CCISS controllers, perform the upgrade from a CD or DVD containing the ESXi installer ISO. Alternatively, in a scripted upgrade, you can specify the ESX or ESXi instance to upgrade by using the upgrade command with the --firstdisk= parameter, as in the sketch below. Installation and upgrade script commands are documented in the vSphere Installation and Setup and vSphere Upgrade documentation.
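
    A minimal kickstart fragment of this form might look like the following; the disk-type argument here is an assumption, so check the vSphere Upgrade documentation for values valid in your environment:
    vmaccepteula
    upgrade --firstdisk=local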

  • ESX console and esxi_install.log report a failure to acquire a DHCP address during upgrade when the ESX system uses manually assigned IP addresses on a subnet without DHCP service
    This situation occurs when an ESX system that has manually assigned IP addresses is run on a subnet that does not have a DHCP server or when the DHCP server is out of capacity. In either case, when the ESX system is upgraded, the system will pause for up to one minute attempting to fetch an IPv4 address from a DHCP server.

    Workaround: None. After the system pauses for up to one minute, it will continue to the successful completion of the upgrade. The system might display a prompt to press Enter to continue. You can either press Enter or ignore the prompt. In either case, the system will proceed with the upgrade after the pause.

Networking

  • Emulex be2net network adapter with device ID 0710 fails to be probed in ESXi 5.0 (KB 2041665)

  • A host profile with vDS configuration created from vCenter Server 4.x might fail to apply gateway settings for stateless booted ESX 5.0 hosts
    While using vSphere Auto Deploy with a 4.x host profile, gateway settings might not be applied for stateless booted ESX 5.0 hosts. As a result, these hosts lose network connectivity and will not be automatically added to vCenter Server 5.0.

    Workaround: To use a 4.x host profile with vDS configuration to boot up stateless ESX 5.0 hosts, perform the following steps:

    1. Boot an ESX 5.0 stateless host configured using Auto Deploy without specifying a 4.x host profile.
    2. Apply the 4.x host profile after the stateless host is added to vCenter Server 5.0.
    3. Create a new host profile from the stateless ESX 5.0 booted host. Use the newly created host profile with vSphere Auto Deploy to boot up stateless ESX 5.0 hosts.

    To boot up ESX 5.0 stateless hosts without vDS configuration, use a 4.x host profile that does not contain vDS settings. You can also disable or remove vDS settings from a 4.x host profile as follows:

    Disable vDS and its associated port groups from the host profile using the Enable/Disable Profile Configuration option at Host Profile > Networking Configuration. Remove vDS and its associated port groups from the host profile using the Edit Profile option at Host Profile > Networking Configuration.

  • Service Console details do not appear in Add Host to vSphere distributed wizard and Manage Hosts wizard
    When adding a 4.x ESX host to a distributed switch, the details of the Service Console network adapters do not appear in the Add Host wizard on the Network Connectivity page in the Virtual Adapter details section. Typically the MAC address, IP address, and Subnet Mask should appear here.

    Workaround: To view the details of a Service Console network adapter, exit the Add Hosts or Manage Hosts wizard, then navigate to Host > Configuration > Networking in the vSphere client.

    If the Service Console network adapter is deployed on a standard switch:

    1. Locate the switch.
    2. Click the "Properties..." button.
    3. Select the Service Console network adapter.
      Make a note of the name of the adapter, which appears in the VSS diagram.

    If the Service Console network adapter is deployed on a distributed switch:

    1. Navigate to the vSphere Distributed Switch tab.
    2. Locate the distributed switch and select "Manage Virtual Adapter...".
    3. Locate and select the Service Console Network adapter.

  • The ESXi Dump Collector server silently exits
    When the ESXi Dump Collector server's port is configured with an invalid value, it exits silently without an error message. Because this port is the port through which the ESXi Dump Collector server receives core dumps from ESXi hosts, these silent exits prevent ESXi host core dumps from being collected. Because error messages are not sent by ESXi Dump Collector to vCenter Server, the vSphere administrator is unaware of this problem. If not resolved, this affects supportability when failures occur on an ESXi host.

    Workaround: Select a port only from within the recommended port range to configure the ESXi Dump Collector server to avoid this failure. Using the default port is recommended.

  • Use of a 10G QLogic NIC, version QLE3142, with a nx_nic driver causes servers to stop functioning during gPXE boot
    If a 10G QLogic NIC, version QLE3142, with a nx_nic driver is used to gPXE boot into the ESXi Stateless boot configuration, the ESXi server stops functioning and fails to boot.

    Workaround: Use other NICs for gPXE boot.

  • Enabling more than 16 VMkernel network adapters causes vSphere vMotion to fail
    vSphere 5.0 has a limit of 16 VMkernel network adapters enabled for vMotion per host. If you enable more than 16 VMkernel network adapters for vMotion on a given host, vMotion to or from that host might fail. The error message says refusing request to initialize 17 stream ip entries, where the number indicates how many VMkernel network adapters you have enabled for vMotion.

    Workaround: Disable vMotion VMkernel network adapters until only 16, at most, are enabled for vMotion.
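
    A possible ESXi Shell sketch for disabling vMotion on a single VMkernel adapter, assuming the vim-cmd vMotion sub-commands are available on your build (the adapter name is an example):
    vim-cmd hostsvc/vmotion/vnic_unset vmk17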

  • Network I/O control overrides 802.1p tags in outgoing packets in ESXi 5.0
    In ESXi 5.0, the network I/O control bridges the gap between virtual networking Quality of Service (QoS) and physical networking QoS, allowing you to specify an 802.1p tag on a per-resource pool basis.
    A side effect of this functionality is that each resource pool is tagged with a default 802.1p tag (0), even if the tag has not been explicitly set. Class of Service (CoS) bit tagging inside a virtual machine is overridden when leaving the ESXi host if network I/O control is enabled.

    Workaround: None. You can choose to not use network I/O control.

  • IPv6-only VMkernel network adapter configurations not supported in the host profile
    When you use the vSphere Client to set IP configurations for a host profile, you are allowed to set IPv4-only, IPv6-only, or a mix of IPv4 and IPv6 settings for VMkernel network adapters. IPv6-only settings, however, are not supported in host profiles. If you configure VMkernel network adapters with IPv6-only settings, you are asked to provide IPv4 configurations in the host profile answer file.

    Workaround: Perform one of the following tasks:

    • Use only IPv6 settings to configure your VMkernel network adapters through the vSphere Client, and do not use a host profile.
    • Include both IPv6 and IPv4 configurations for VMkernel network adapters when creating and applying the host profile, then disable the IPv4 configurations for the VMkernel network adapters after applying the profile.

  • Some Cisco switches drop packets with priority bit set
    VMware vSphere Network I/O Control allows you to tag outgoing traffic with 802.1p tags. However, some Cisco switches (4948 and 6509) drop the packets if the tagged packets are sent on the native VLAN (VLAN 0).

    Workaround: None.

  • Significant delay in the ESXi boot process during VLAN configuration and driver loading
    ESXi hosts with BE2 or BE3 interfaces encounter a significant delay during driver loading and VLAN configuration. The length of the delay increases with the number of BE2 and BE3 interfaces on the host and can last for several minutes.

    Workaround: None.

  • Adding a network resource pool to a vSphere Distributed Switch fails with the error Cannot complete a vSphere Distributed Switch operation for one or more host members
    This error message indicates that one or more of the hosts on the distributed switch is already associated with the maximum number of network resource pools. The maximum number of network resource pools allowed on a host is 56.

    Workaround: None.

  • Adding a network resource pool to a vSphere Distributed Switch fails with the error vim.fault.LimitExceeded
    This error message indicates that the distributed switch already has the maximum number of network resource pools. The maximum number for network resource pools on a vSphere Distributed Switch is 56.

    Workaround: None.

  • LLDP does not display system names for Extreme switches
    By default, system names on Extreme switches are not advertised. Unless a system name is explicitly set to advertise on the Extreme switch, LLDP cannot display this information.

    Workaround: Run the configure lldp ports <port ID> advertise system-name command to advertise the system name on the Extreme switch.

  • Truncating mirrored packets causes ESXi to fail
    When a mirrored packet is longer than the mirrored packet length set for the port mirroring session, ESXi fails. Other operations that truncate packets might also cause ESXi to fail.

    Workaround: Do not set a mirrored packet length for a port mirroring session.

  • Fault Tolerance is not compatible with vSphere DirectPath I/O with vSphere vMotion
    When Fault Tolerance is enabled on a virtual machine, DirectPath I/O with vMotion is inactive for all virtual adapters on the virtual machine.

    Workaround: Disable Fault Tolerance and reboot the virtual machine before enabling DirectPath I/O with vMotion.

  • vSphere DirectPath I/O With vSphere vMotion is disabled by VMCI-based applications
    When you use any VMCI-based application on a Cisco UCS system, DirectPath becomes inactive on all virtual machine network adapters.

    Workaround: Stop using all VMCI-based applications and reboot the virtual machine to restore vSphere DirectPath I/O.

Storage
  • I/O latency threshold appears to be 15ms after you disable I/O metrics for a datastore cluster
    After you disable I/O metrics for a datastore cluster, the Summary page for the datastore cluster continues to display an I/O latency threshold value of 15ms (the default).

    Workaround: None. To view the correct value, select Datastore Cluster > Storage.

  • Link to enter SDRS maintenance mode appears on Summary page of standalone datastore
    Only datastores that are a part of a datastore cluster can successfully enter Storage DRS maintenance mode. However, a link to enter Storage DRS maintenance mode appears on the Summary page for a datastore that is not in a datastore cluster. When you click Enter SDRS maintenance mode for a standalone datastore, the datastore attempts to enter maintenance mode and the task appears to be pending indefinitely.

    Workaround: Cancel the Enter SDRS Maintenance Mode task in the Recent Tasks pane of the vSphere Client.

  • An all paths down (APD) condition during Storage vMotion can result in communication failure between vCenter Server and ESXi host
    If an APD condition occurs when you migrate virtual machines using Storage vMotion, vCenter Server disconnects the host involved in Storage vMotion from the vCenter Server inventory. This condition persists until the background Storage vMotion operation completes. This action could take a few minutes or hours depending on the Storage vMotion operation time. During this time, no other operation can be performed for that particular host from vCenter Server.

    Workaround: None. After the Storage vMotion operation completes, vCenter Server reconnects the host back to the inventory. None of the running virtual machines on non-APD datastores are affected by this failure.

  • Symbolic links added to a datastore might cause the Datastore Browser to incorrectly display datastore contents
    When you add symbolic links at the top level of a datastore, either externally in an NFS server or by logging in to the host, you might not be able to see the correct datastore information, such as its files and folders, when you browse the datastore. Symbolic links referencing incorrect files and folders might cause this problem.

    Workaround: Remove the symbolic links. Do not use symbolic links in datastores.

  • Attempts to add an extent to an ATS-capable VMFS datastore fail
    You can only span an ATS-capable datastore over an ATS-capable device. If you select a device that does not support ATS to extend the ATS-capable datastore, the operation fails. The vSphere Client displays the An error occurred during host configuration message. In the log file, you might also find the following error message: Operation failed, unable to add extent to filesystem.

    Workaround: Before adding an extent to an ATS datastore, verify whether the extent device supports ATS by running the following command:
    esxcli storage core device vaai status get -d=device_ID
    The output must display the following information:
    ATS Status: supported

  • Storage DRS might not behave as expected when balancing I/O load
    When you use IOMeter software to generate I/O load to test Storage DRS, the IOMeter populates the files with only zeros by default. This data does not contain random patterns of ones and zeros, which are present in real data and which are required by Storage DRS to determine the I/O characteristics and performance of the datastore.

    Workaround: When you test Storage DRS load balancing, use real data to populate at least 20 percent of the storage space on the datastore. If you use IOMeter software to generate I/O load, choose a version that allows you to write random patterns of ones and zeros to your files.

  • Names of new virtual machine disks do not appear in Storage DRS initial placement recommendations
    When creating, cloning, or deploying from template a virtual machine on a Storage DRS-enabled datastore cluster, the placement recommendations or faults dialog box does not list the names of the new virtual machine hard disks. The dialog box displays Place new virtual machine hard disk on <datastore name>.

    Workaround: None. When virtual machines are being created, hard disk names are not assigned until the disks are placed. If the virtual machine hard disks are of different size and are placed on different datastores, you can use the Space Utilization before and after statistics to estimate which disk is placed on which datastore.

  • Storage DRS appears to be disabled when you use the Scheduled Task wizard to create or clone a virtual machine
    When you create a scheduled task to clone or create a virtual machine, and select a datastore cluster as the destination storage for the virtual machine files, the Disable Storage DRS check box is always selected. You cannot deselect the Disable Storage DRS check box for the virtual machine in the Scheduled Task wizard.

    Workaround: None. The Disable Storage DRS check box is always selected in the Scheduled Task wizard. However, after the Scheduled Task runs and the virtual machine is created, the automation level of the virtual machine is the same as the default automation level of the datastore cluster.

  • vSphere Client displays an error when you attempt to unmount an NFS datastore with Storage I/O Control enabled
    If you enable Storage I/O Control for an NFS datastore, you cannot unmount that datastore. The following error message appears: The resource is in use.

    Workaround: Before you attempt to unmount the datastore, disable Storage I/O Control.

  • ESXi cannot distinguish between thick provision lazy zeroed and thick provision eager zeroed virtual disks on NFS datastores with Hardware Acceleration support
    When you use NFS datastores that support Hardware Acceleration, the vSphere Client allows you to create virtual disks in Thick Provision Lazy Zeroed (zeroedthick) or Thick Provision Eager Zeroed (eagerzeroedthick) format. However, when you check the disk type on the Virtual Machine Properties dialog box, the Disk Provisioning section always shows Thick Provision Eager Zeroed as the disk format no matter which format you selected during the disk creation. ESXi does not distinguish between lazy zeroed and eager zeroed virtual disks on NFS datastores.

    Workaround: None.

  • After migration, the mode of an IDE RDM disk in physical compatibility does not change to Independent Persistent
    The mode of the IDE RDM disk in physical compatibility does not change to Independent Persistent after you migrate the virtual machine with the disk from the ESX/ESXi 4.x host to ESXi 5.0.

    Workaround: After migration, use the vSphere Client to change the disk's mode to Independent Persistent.

  • Attempts to add a virtual compatibility RDM with a child disk to an existing virtual machine fail
    If you try to add a virtual compatibility RDM that has a child disk to an existing virtual machine, the operation fails. The vSphere Client displays the following error message: Reconfigure failed: vim.fault.DeviceUnsupportedForVmPlatform.

    Workaround: Remove the child disk to be able to add the virtual compatibility RDM.

  • With software FCoE enabled, attempts to display Storage Maps fail with an error message
    This problem affects only those ESXi hosts that have been added to vCenter Server without any previous software FCoE configuration. After you enable software FCoE adapters on these hosts, attempts to display Storage Maps in the vSphere Client fail. The following error message appears: An internal error has occurred: Failed to serialize response.

    Workaround: Configure software FCoE on the ESXi host first, and then add the host to vCenter Server.

  • An NFS datastore with adequate space displays out of space errors
    This problem happens only when you use Remote Procedure Calls (RPC) client sharing and mount multiple NFS volumes from the same NFS server IP address. In this configuration, when one of the NFS volumes runs out of space, other NFS volumes that share the same RPC client might also report no space errors.

    Workaround: Disable RPC client sharing on your host by performing this task:

    1. In the vSphere Client inventory panel, select the host.
    2. Click the Configuration tab, and click Advanced Settings under Software.
    3. Click NFS in the left panel and scroll down to NFS.MaxConnPerIP on the right.
    4. Change the default value to 128.
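
    Equivalently, you can make the same change from the ESXi Shell; this is a sketch that assumes the advanced option path mirrors the setting name above:
    esxcli system settings advanced set -o /NFS/MaxConnPerIP -i 128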

  • After reboot, stateless host cannot detect iSCSI datastores
    If a stateless host is added to the Cisco Nexus 1000V Series Switch and configured with MTU of 9000, then after reboot, the host cannot detect iSCSI datastores even though it is able to discover the corresponding devices.

    Workaround: To make the datastores visible, click Refresh on the Configuration > Storage screen of the vSphere Client.

Server Configuration
  • A change to the SATP-PSP rule for host profiles applied to an ESXi host is not reflected in the host after reboot
    After you change the Storage Array Type Plugin Path Selection Policy (SATP PSP) rule, apply the change, and reboot a host provisioned with Auto Deploy, the change is not reflected in the SATP PSP for each of the devices. For ESXi hosts not provisioned with Auto Deploy, the SATP PSP change is correctly updated in the host. However, the ESXi host fails a compliance check against the host profile.

    Workaround: After applying the host profile to the ESXi host, delete the host profile and extract a new host profile from the ESXi host, then attach it to the host before rebooting. To do this, use the Update from Reference Host feature in the Host Profiles UI. This task deletes the host profile and extracts a new profile from the host while maintaining all the current attachments.

    Use the esxcli command to edit the SATP PSP on the host itself before you extract the host profile, as in the sketch below. Do not use the host profile editor to edit the SATP PSP.
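
    For example, to change the default PSP for a given SATP from the ESXi Shell (the plugin names here are illustrative; substitute the ones that apply to your array):
    esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR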

  • Applying a host profile with a service startup policy of off does not disable the service
    When a host profile is created using a reference ESXi host that has some services disabled, and that profile is applied to a host with those services enabled, the host profile application process does not disable the services on the target ESXi host. This situation is commonly encountered by users who have enabled the ESXShell or SSH services on the target ESXi hosts through the Security Profile in the vSphere Client or Troubleshooting Options in the DCUI.

    Workaround: The reboot process disables the services. You can also manually stop the services in the vSphere Client by configuring the host. Perform this procedure for each service.

    1. Select the host in the inventory.
    2. Click the Configuration tab.
    3. Click Security Profile in the Software section.
    4. Click Properties and select the service.
    5. Click Options.
    6. Click Stop and click OK.

  • Host profile answer file status is not updated when switching the attached profile of the host
    When attaching a host profile to a host that was previously attached to another host profile, the answer file status is not updated. If the answer file status is Completed, after attaching another host profile to the host, the answer file status in the host profile view still appears as Completed. The actual status, however, might be changed to Incomplete.

    Workaround: Manually update the answer file status after attaching a host profile.

    1. In vSphere Client, select the newly attached profile in the Host Profiles inventory view.
    2. Click the Hosts and Clusters tab.
    3. Right-click the host from the Entity Name list and select Check Answer File.

    The host profile answer file status is updated.

  • Manually applying a host profile containing a large configuration might timeout
    Applying a host profile that contains a large configuration, for example, a very large number of vSwitches and port groups, might timeout if the target host is either not configured or only partially configured. In such cases, the user sees the Cannot apply the host configuration error message in the vSphere Client, although the underlying process on ESXi that is applying the configuration might continue to run.

    In addition, syslog.log or other log files might have error messages such as the following message:
    Error interacting with configuration file /etc/vmware/esx.conf: Timeout while waiting for lock, /etc/vmware/esx.conf.LOCK, to be released. Another process has kept this file locked for more than 20 seconds. The process currently holding the lock is hostd-worker(5055). This is likely a temporary condition. Please try your operation again.

    This error is caused by contention on the system while multiple operations attempt to gather system configuration information at the same time as the host profiles apply operation sets the configuration. Because of these errors and other timeout-related errors, even after the host profiles apply operation completes on the system, the configuration captured in the host profile might not be fully applied. Check the host for compliance to see which parts of the configuration failed to apply, and perform an Apply operation to fix those remaining non-compliance issues.

    Workaround: Perform one of the following:

    • ESXi hosts not provisioned with Auto Deploy

      1. Increase the timeout value for the apply operation by adding the following entry in the /etc/vmware/hostd/cmdMo.xml file:

        <managedObject id="2">
        <type> vim.profile.host.profileEngine.HostProfileManager </type>
        <moId> ha-hostprofileengine-hostprofilemanager </moId>
        <timeOutInSeconds> xxxx </timeOutInSeconds>
        <version> vim.version.dev </version>
        <cmd> /usr/bin/sh </cmd>
        <arg id="1"> -l </arg>
        <arg id="2"> -c </arg>
        <arg id="3"> /usr/lib/vmware/hostd/hmo/hostProfileEngine.py --cgi </arg>
        </managedObject>


        Where xxxx is the timeout value in seconds. By default, the apply operation times out in 10 minutes. This entry lets you set a longer timeout. For example, a value of 3600 increases the timeout to 1 hour. The value you enter might vary depending on the specific host profile configuration. After you set a high enough value, the apply operation timeout error no longer appears and the task is visible in the vSphere Client until it is complete.
      2. Restart hostd.
    • Hosts provisioned with Auto Deploy

      1. Reboot the ESXi host provisioned with Auto Deploy.
      2. For ESXi hosts provisioned with Auto Deploy, ensure that the answer file is complete by performing the Update Answer File operation on the ESXi host and then rebooting.

        The configuration in the host profile and answer file is applied on the system during initialization. Large configurations might take longer to boot, but this can be significantly faster than manually applying the host profile through the vSphere Client.
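
    In step 2 of the procedure for hosts that are not provisioned with Auto Deploy, hostd is typically restarted from the ESXi Shell:
    /etc/init.d/hostd restart
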
  • Host Profiles compliance check fails for reference host with newly created profile
    A compliance check for a newly configured host profile, for example, one configured with iSCSI, might fail if the answer file is not updated before checking compliance.

    Workaround: Update the answer file for the profile before performing a compliance check.

  • A host profile fails to apply if the syslog logdir is set to a datastore without a path
    If the esxcli command or vSphere Client is used to set the syslog directory to a datastore without an additional path, a host profile extracted from that system fails to apply to other hosts.

    For example, the following configures the system in a way that triggers this condition:
    esxcli system syslog config set --logdir /vmfs/volumes/datastore1

    Similarly, setting the Syslog.global.logDir to datastore1 in the Advanced Settings dialog of the host's Configuration tab also triggers this condition.

    Workaround: Perform one of the following:

    • Modify Syslog.global.logDir in the Advanced Settings dialog to have a value of "DATASTORE_NAME /" instead of "DATASTORE_NAME" before extracting the host profile.
    • Edit the host profile such that the Advanced configuration option for Syslog.global.logDir has a value of "DATASTORE_NAME /" instead of "DATASTORE_NAME".
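
    Setting the syslog directory with an explicit subdirectory from the start also avoids the condition; for example (the path is hypothetical):
    esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logdir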

  • Applying a host profile might recreate vSwitches and portgroups
    The vSwitches and portgroups of a host might be recreated when a host profile is applied. This can occur even if the host is compliant with the host profile.

    This occurs when the portgroupprofile policy options are set to use the default values. This setting leads to an issue where the comparison between the profile and the host configuration might incorrectly fail when the profile is applied. At this time, the compliance check passes. The comparison failure causes the apply profile action to recreate the vSwitches and portgroups. This affects all subprofiles in portgroupprofile.

    Workaround: Change the profile settings to match the desired settings instead of selecting to use the default.

  • VMware embedded SNMP agent reports incorrect software installed date through hrSWInstalledTable from HOST-RESOURCES-MIB
    When you poll for installed software (hrSWInstalledTable RFC 2790) by using the VMware embedded SNMP agent, the installation date shown for user-installed software is not correct, because the hrSWInstalledTable from HOST-RESOURCES-MIB reports hrSWInstalledDate incorrectly.

    Workaround: To retrieve the correct installation date, use the esxcli command esxcli software vib list.

vCenter Server and vSphere Client

  • Unknown device status in the vCenter Server Hardware Status tab
    In the Sensors view of the vCenter Server Hardware Status tab, the status for some PCI devices is displayed as Unknown Unknown #<number>. The latest PCI IDs for some devices are not listed in the /usr/share/hwdata/pci.ids file on the ESXi host. vCenter Server lists devices with missing IDs as unknown.

    The unknown status is not critical and the list of PCI IDs is updated regularly in major vSphere releases.

  • Database error while reregistering AutoDeploy on vCenter Virtual Appliance (VCVA) (KB 2014087).
  • The snmpwalk command returns an error message when you run it without using -t option
    When the snmpwalk command is run without using the -t and -r options for polling SNMP data, the VMware embedded SNMP agent does not show complete data and displays the error message, No response from host.

    Workaround: When you run the snmpwalk command, use the -t option to specify the timeout interval and the -r option to set the number of retries. For example: snmpwalk -m all -c public -v1 host-name -r 2 -t 10 variable-name.

  • The vCLI command to clear the embedded SNMP agent configuration resets the source of indications and clears trap filters
    The vicfg-snmp -r vCLI command to clear the embedded SNMP agent configuration resets the source of events or traps to the default, indications, and clears all trap filters.

    Workaround: None.

  • Enabling the embedded SNMP agent fails with Address already in use error
    When you configure a port other than udp/161 for the embedded SNMP agent while the agent is not enabled, the agent does not check whether the port is in use. This might result in a port conflict when you enable the agent, producing an Address already in use error message.

    Workaround: Enable the embedded SNMP agent before configuring the port.

Virtual Machine Management

  • Mouse pointer might be unable to move out of Windows Server 2012 R2 and Windows 8.1 virtual machines after VMware Tools is installed*
    When you install VMware Tools on a Windows Server 2012 R2 or Windows 8.1 virtual machine, the mouse pointer might be unable to move out of the virtual machine. This issue occurs when you configure the USB 2.0 controller before installing VMware Tools.

    Workaround: Set the following configuration option in the .vmx file:
    mouse.vusb.enable = "FALSE"

  • Driver availability for xHCI controller for USB 3.0 devices
    Virtual hardware version 8 includes support for the xHCI controller and USB 3.0 devices. However, an xHCI driver might not be available for many operating systems. Without a driver installed in the guest operating system, you cannot use USB 3.0 devices. No drivers are known to be available for the Windows operating system at this time. Contact the operating system vendor for availability of a driver. When you create or upgrade virtual machines with Windows guest operating systems, you can continue using the existing EHCI+UHCI controller, which supports USB 1.1 and 2.0 devices for USB configuration from an ESXi host or a client computer to a virtual machine. If your Windows virtual machine has xHCI and EHCI+UHCI USB controllers, newly added USB 1.1 and USB 2.0 devices will be connected to xHCI and will not be detected by the guest.

    Workaround: Remove the xHCI controller from the virtual machine's configuration to connect USB devices to EHCI+UHCI.

  • Linux kernels earlier than 2.6.27 do not report nonpowers of 2 cores per socket
    Beginning with ESXi 5.0, multicore virtual CPU support allows nonpowers of 2 as cores per socket values. Linux kernels earlier than 2.6.27 report only powers of 2 values for cores per socket correctly. For example, some Linux guest operating systems might not report any physical ID information when you set numvcpus = 6 and cpuid.coresPerSocket = 3 in the .vmx file, as shown below. Linux kernels 2.6.28 and later report the CPU and core topology correctly.
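
    As .vmx entries, the example topology above reads:
    numvcpus = "6"
    cpuid.coresPerSocket = "3"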

    Workaround: None

  • When you hot add memory to virtual machines with Linux 64-bit, Windows 7, or Windows 8 32-bit guest operating systems, you cannot increase existing virtual memory to more than 3GB
    The following conditions apply to hot adding memory to virtual machines with Linux 64-bit, Windows 7, and Windows 8 32-bit guest operating systems.

    • If the powered-on virtual machine has less than 3GB of memory, you cannot hot add memory in excess of 3GB.
    • If the virtual machine has 1GB, you can add 2GB.
    • If the virtual machine has 2GB, you can add 1GB.
    • If the virtual machine has 3444MB of memory, you can add 128MB.
    • If the powered-on virtual machine has exactly 3GB of memory, you cannot hot add any memory.

    If the powered-on virtual machine has more than 3GB of memory, you can increase the virtual machine memory to 16 times the initial virtual machine power-on size or to the hardware version limit, whichever is smaller. The hardware version limit is 255GB for hardware version 7 and 1011GB for hardware version 8.

    Linux 64 bit and 32 bit Windows 7 and Windows 8 guest operating systems freeze when memory grows from less than or equal to 3GB to greater than 3GB while the virtual machine is powered on. This vSphere restriction ensures that you do not trigger this bug in the guest operating system.

    Workaround: None.

  • CPU hot add error on hardware version 7 virtual machines
    Virtual CPU hot add is supported with the multicore virtual CPU feature for hardware 8 virtual machines.
    For hardware version 7 virtual machines with cores per socket greater than 1, when you enable CPU hot add in the Virtual Machine Properties dialog box and try to hot add virtual CPUs, the operation fails and a CPU hot plug not supported for this virtual machine error message appears.

    Workaround: To use the CPU hot-add feature with hardware version 7 virtual machines, power off the virtual machine and set the number of cores per socket to 1.
    For best results, use hardware version 8 virtual machines.

  • Hot-adding memory to a Windows 2003 32-bit system that uses the November 20, 2007 LSI SAS driver can cause the virtual machine to stop responding
    The November 20, 2007 LSI SAS driver cannot correctly address memory above 3GB if such memory is not present at system startup. When you hot-add memory to a system that has less than 3GB of memory before the hot-add, but more than 3GB of memory after the hot-add, the Windows state is corrupted and eventually causes Windows to stop responding.

    Workaround: Use the latest LSI SAS driver available from the LSI website. Do not use the LSISAS1068 virtual adapter for Windows 2003 virtual machines.

  • Incorrect IPv6 address appears on the Summary tab on MacOS X Server 10.6.5 and later guest operating systems
    When you click View All on the Summary tab in the vSphere Client, the list of IPv6 addresses includes an incorrect address for the link local address. You see the incorrect address when you run the ifconfig command and compare the output with the list of addresses in the vSphere Client. This incorrect information also appears when you run the vim-cmd command to get the GuestInfo data.

    Workaround: None

  • Creating a large number of virtual machines simultaneously causes file operations to fail
    When you create a large number of virtual machines simultaneously that reside within the same directory, the storage system becomes overwhelmed and some file operations fail. A vim.fault.CannotAccessFile error message appears and the create virtual machine operation fails.

    Workaround: Create additional virtual machines in smaller batches of, for example, 64, or try creating virtual machines in different datastores or within different directories on the same datastore.

  • USB devices passed through from an ESXi host to a virtual machine might disconnect during migration with vMotion
    When a USB device is passed through to a virtual machine from an ESXi host and the device is configured to remain connected during migration with vMotion, the device might disconnect during the vMotion operation. The devices can also disconnect if DRS triggers a migration. When the devices disconnect, they revert to the host and are no longer connected to the virtual machine. This problem occurs more often when you migrate virtual machines that have multiple USB devices connected, but occasionally happens when one or a small number of devices are connected.

    Workaround: Migrate the virtual machine back to the ESXi host to which the USB devices are physically attached and reconnect the devices to the virtual machine.

  • A virtual machine with an inaccessible SCSI passthrough device fails to power on
    If a SCSI passthrough device attached to a virtual machine has a device backing that is inaccessible from the virtual machine's host, the virtual machine fails to power on with the error, An unexpected error was received from the ESX host while powering on VM.

    Workaround: Perform one of the following procedures:

    • If the virtual machine's host has a physical SCSI device, change the device backing of the SCSI passthrough device to the host's physical SCSI device, and power on the virtual machine.
    • If the host does not have a physical SCSI device, remove the SCSI passthrough device from the virtual machine and power it on.

  • The VMware Tools system tray icon might incorrectly show the status as Out of date
    If a virtual machine is using VMware Tools that was installed with vSphere 4.x, the system tray icon in the guest operating system incorrectly shows the status as Out-of-date. In the vSphere 5.0 Client and vSphere Web Client, the Summary tab for the virtual machine shows the status as Out-of-date (OK. This version is supported on the existing host, but upgrade if new functionality does not work). VMware Tools installed with vSphere 4.x is supported and does not strictly require an upgrade for vSphere 5.0.

    Workarounds: In vSphere 5.0, use the Summary tab for the virtual machine in the vSphere Client or the vSphere Web Client to determine the status of VMware Tools. If the status is Out-of-date and you do not want to upgrade, you can use the following settings to disable the upgrade prompts and warning icon in the guest operating system:

    • If the virtual machine is set to automatically upgrade VMware Tools but you want the virtual machine to remain at the lowest supported version, set the following property as an advanced configuration parameter: set tools.supportedOld.autoupgrade to FALSE. This setting also disables the exclamation point icon in the guest, which indicates that VMware Tools is unsupported.
    • If VMware Tools is out of date and you want to disable the exclamation mark icon that appears on the VMware Tools icon in the system tray, set the following property as an advanced configuration parameter: set tools.supportedOld.warn to FALSE.

    Neither of these settings affect the behavior of VMware Tools when the Summary tab shows the status as Unsupported or Error. In these situations, the exclamation mark icon appears and VMware Tools is automatically upgraded (if configured to do so), even when the advanced configuration settings are set to FALSE. You can set advanced configuration parameters either by editing the virtual machine's configuration file, .vmx or by using the vSphere client or the vSphere Web Client to edit the virtual machine settings. On the Options tab, select Advanced > General, and click Configuration Parameters.
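
    As .vmx entries, the two advanced parameters described above take the following form; this is a sketch, so set only the ones you need:
    tools.supportedOld.autoupgrade = "FALSE"
    tools.supportedOld.warn = "FALSE"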

  • Check and upgrade Tools during power cycling feature does not work in ESXi 5.0 and later
    In ESX/ESXi 4.1, the Check and upgrade Tools during power cycling option was available to upgrade VMware Tools when the virtual machine shut down. This feature does not work in ESXi 5.0 and later. Ignore any documentation procedures related to this feature.

    Workaround: Install VMware Tools manually.

  • Mac OS X guest operating systems with high CPU and memory usage might experience kernel panic during virtual machine suspend or resume or migration operations
    Following virtual machine suspend or resume operations or migration with vMotion on a host under a heavy CPU and memory load, the Translation Lookaside Buffer (TLB) invalidation request might time out. In such cases, the Mac OS X guest operating system stops responding and a variant of one of the following messages is written to the vmware.log file:
    The guest OS panicked. The first line of the panic report is: Panic(CPU 0): Unresponsive processor
    The guest OS panicked. The first line of the panic report is: panic(cpu 0 caller 0xffffff8000224a10): "pmap_flush_tlbs()timeout: " "cpu(s) failing to respond to interrupts,pmap=0xffffff800067d8a0 cpus_to_respond=0x4"@/SourceCache/xnu/xnu-1504.7.4/osfmk/x86_64/pmap.c:2710

    Workaround: Reduce the CPU and memory load on the host or reduce the virtual CPU count to 1.

  • Virtual machine clone or relocation operations from ESXi 5.0 to ESX/ESXi 4.1 fail if replication is enabled
    If you use the hbr enablereplication command to enable replication on a virtual machine that resides on an ESXi 5.0 host and clone the virtual machine to an ESX/ESXi 4.1 or earlier host, validation fails with an operation is not supported error message. Cloning of ESXi 5.0 virtual machines on ESX/ESXi 4.1 hosts is not supported.

    Workaround: Select one of the following workarounds:

    • Clone the virtual machine onto an ESXi 5.0 host
    • Clone or relocate a new virtual machine on an ESX/ESXi 4.1 host.
  • Cannot set VMware Tools custom scripts through VMware-Toolbox UI in Linux guest operating system when the custom scripts contain non-ASCII characters in their path
    Non-ASCII characters appear as square boxes with an X in them in the VMware Tools Properties window in Linux guest operating systems when the system locale is zh_CN.gb18030, ja_JP.eucjp, or ko_KR.euckr. In such cases, you cannot set custom VMware Tools scripts.

    Workaround: Perform one of the following tasks:

    • Change the directory name where custom VMware Tools scripts are located so that it contains ASCII characters only.
    • Set custom VMware Tools scripts by entering the vmware-toolbox-cmd script command at a shell prompt, as in the example that follows this list.
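
    For example, to assign a power-on script (the script kind names that vmware-toolbox-cmd accepts are typically power, resume, suspend, and shutdown; the path here is hypothetical and must contain only ASCII characters):
    vmware-toolbox-cmd script power set /usr/local/scripts/poweron.sh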

  • Non-ASCII DNS suffixes are not set correctly after customizing Windows XP and Windows 2003
    If you enter a non-ASCII DNS suffix in the Network Properties DNS tab when you use the Customization Specification Wizard to customize Windows XP or Windows 2003, the customization is reported as successful but the non-ASCII DNS suffix is not set correctly.

    Workaround: Set the DNS suffix manually in Windows XP and Windows 2003.

  • VMware embedded SNMP agent reports incorrect status for processors in the hrDeviceStatus object of the HOST-RESOURCES-MIB module
    When reporting the system details, the VMware embedded SNMP agent shows an incorrect status for processors. The SNMP agent reports the processor status as Unknown for the hrDeviceStatus object in HOST-RESOURCES-MIB. The ESX/net-snmp implementation of HOST-RESOURCES-MIB does not return the hrDeviceStatus object, which is equivalent to reporting an unknown status.
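
    For example, you can observe the reported values with a standard net-snmp query (the host name and community string are placeholders):
    snmpwalk -v 2c -c public esxi-host HOST-RESOURCES-MIB::hrDeviceStatus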

    Workaround: Use either CIM APIs or SMBIOS data to check the processor status.

  • The snapshot disk path in the .vmsd snapshot database file and the parent path in the delta disk descriptor file are not updated after migration
    When snapshot.redoNotWithParent is set to TRUE and you change the snapshotDirectory setting from, for example, Datastore A to Datastore B, you might see an error message that says, Detected an invalid snapshot configuration. This problem occurs when both of the following conditions exist:

    • You revert to a previous snapshot in the snapshot tree and create new snapshots from that snapshot point. The result is a nonlinear snapshot tree hierarchy.
    • The disk links in a disk chain span multiple datastores and include both the source and destination datastores. This situation occurs if you change the snapshotDirectory settings to point to different datastores more than once and take snapshots of the virtual machine between the snapshotDirectory changes. For example, you take snapshots of a virtual machine with snapshotDirectory set to Datastore A, revert to a previous snapshot, then change the snapshotDirectory settings to Datastore B and take additional snapshots. Now you migrate the virtual disk from Datastore B to Datastore A.

    The best practice is to retain the default setting, which stores the parent and child snapshots together in the snapshot directory. Avoid changing the snapshotDirectory settings or taking snapshots between datastore changes. If you set snapshot.redoNotWithParent to TRUE, perform a full storage migration to a datastore that is currently not used by the virtual machine.

    Workaround: Manually update the disk path references to the correct datastore path in the snapshot database file and disk descriptor file.
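
    For illustration only, the settings discussed above might appear in the .vmx file as follows. The snapshot.redoNotWithParent name comes from this note; workingDir is a commonly used parameter for redirecting the snapshot directory, and the path shown is hypothetical:
    snapshot.redoNotWithParent = "TRUE"
    workingDir = "/vmfs/volumes/DatastoreB/vm1"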

Migration
  • During Daylight Saving Time (DST) transitions, the time axis on performance charts is not updated to reflect the DST time change
    For example, local clocks in areas that observe DST were set forward 1 hour on Sunday, March 27, 2011 at 3am. The tick markers on the time axis of performance charts should have been labeled ..., 2:00, 2:20, 2:40, 4:00, 4:20, ..., omitting ticks for the hour starting at 3am. The labels actually displayed are ..., 2:00, 2:20, 2:40, 3:00, 3:20, 3:40, 4:00, 4:20, ....

    Workaround: None.

  • Virtual machine disks retain their original format after a Storage vMotion operation in which the user specifies a disk format change
    When you attempt to convert the disk format to Thick Provision Eager Zeroed during a Storage vMotion operation of a powered-on virtual machine on a host running ESX/ESXi 4.1 or earlier, the conversion does not happen. The Storage vMotion operation succeeds, but the disks retain their original disk format because of an inherent limitation of ESX/ESXi 4.1 and earlier. If the same operation is performed on a virtual machine on an ESXi 5.0 host, the conversion happens correctly.

    Workaround: None.

VMware HA and Fault Tolerance
  • vSphere HA fails to restart a virtual machine that was being migrated with vMotion when a host failure occurred
    While a virtual machine is being migrated from one host to another, the original host might fail, become unresponsive, or lose access to the datastore that contains the virtual machine's configuration file. If such a failure occurs and the vMotion subsequently also fails, vSphere HA might not restart the virtual machine and might stop protecting it.

    Workaround: If the virtual machine fails and vSphere HA does not power it back on, power the virtual machine back on manually. vSphere HA then protects the virtual machine.

Guest Operating System

  • USB 3.0 devices might not work with Windows 8 or Windows Server 2012 virtual machines
    When you use USB 3.0 devices with Windows 8 or Windows Server 2012 virtual machines from a Windows or Linux client operating system, error messages similar to the following might be displayed:
    Port Reset Failed
    The USB set SEL request failed

    Workaround: None.

  • PXE boot or reboot of RHEL 6 results in a blank screen
    When you attempt to use the Red Hat boot loader to PXE boot into RHEL 6 in an EFI virtual machine, the screen becomes blank after the operating system is selected. This problem is also seen if you remove the splashscreen directive from the grub.conf file in a regular installation of RHEL 6 and reboot the virtual machine.

    Workaround: Verify that the splashscreen directive is present and references a file that is accessible to the boot loader.
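
    For example, in a legacy GRUB configuration on RHEL 6 the directive is the splashimage line, which typically looks similar to the following (the device and path are illustrative):
    splashimage=(hd0,0)/grub/splash.xpm.gz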

  • Multiple vNICs randomly disconnect upon reboot for Mac OS X 10.6.x
    When you reboot Mac OS X 10.6.x, an incorrect guest link state is shown if three or more e1000 vNICs are present.

    For example, with three e1000 vNICs, the guest reports 2 connected and 1 disconnected; with five, it reports 3 connected and 2 disconnected, and so on.

    Workaround: Manually deactivate and reactivate the adapters with the ifconfig utility, as in the example that follows. The change does not persist across reboots.
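
    For example, to cycle one adapter (the interface name is hypothetical; repeat for each affected vNIC):
    sudo ifconfig en2 down
    sudo ifconfig en2 up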

Supported Hardware
  • IBM x3550 M2 Force Legacy Video on Boot must be disabled
    The IBM x3550 M2 has a firmware option called Force Legacy Video on Boot that enables legacy INT10h video support when booting from the Unified Extensible Firmware Interface (UEFI). This option is not compatible with ESXi 5.0 and must be disabled.

    Workaround: When booting the IBM x3550 M2 from UEFI, press F1 to enter the firmware setup, select System Settings > Legacy Support > Force Legacy Video on Boot, and select Disable.

Miscellaneous
    • Inaccurate monitoring of sensor data in the vCenter Server Hardware Status tab (KB 2012998).

    • Location of log files has changed from /var/log to /var/run/log
      ESXi 5.0 log files are located in /var/run/log. For backward compatibility, the previous location, /var/log, contains symbolic links to the most recent log files in the current location, /var/run/log. Rotated log files are not linked.
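
      For example, listing a log file under the old path should show a symbolic link into the new location (the file name is illustrative):
      ls -l /var/log/hostd.log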

      Workaround: None.

    • On Linux virtual machines, you cannot install OSPs after uninstalling VMware Tools with the tar installer
      After you uninstall a VMware Tools installation that was performed with the tar installer on a Linux virtual machine, files are left on the system. In this situation, you cannot install OSPs (Operating System Specific Packages).

      Workaround: Run the following command:
      rm -rf /usr/lib/vmware-tools /etc/vmware-tools

    • When a proxy server is enabled in the Internet Explorer LAN settings, PowerCLI sometimes fails to add an online depot
      You can add an online depot in PowerCLI by using the Add-EsxSoftwareDepot cmdlet. Under certain conditions in which a proxy server is enabled for the machine in use, PowerCLI fails to add the online depot in its session.
      This failure occurs only if all of the following conditions exist:
      • The customer's site requires an HTTP proxy to access the Web.
      • The customer hosts a depot on their internal network.
      • The customer's proxy cannot connect to the depot on the internal network.

      Workaround:
      1. Disable the proxy server in the IE LAN settings.
      2. Add the online depot in PowerCLI.
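
      For example (the depot URL is hypothetical):
      Add-EsxSoftwareDepot http://internal-depot.example.com/index.xml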