
VMware ESXi 5.0 Update 2 Release Notes

ESXi 5.0 Update 2 | 20 DEC 2012 | Build 914586

Last updated: 30 MAY 2013

What's in the Release Notes

These release notes cover the following topics:

  • What's New
  • Earlier Releases of ESXi 5.0
  • Internationalization
  • Compatibility and Installation
  • Upgrades for This Release
  • VMware vSphere SDKs
  • Open Source Components for VMware vSphere
  • Patches Contained in this Release
  • Resolved Issues
  • Known Issues

What's New

The following information describes some of the enhancements available in this release of VMware ESXi:

  • Support for additional guest operating systems. This release adds support for Solaris 11, Solaris 11.1, and Mac OS X Server Lion 10.7.5 guest operating systems.
    For a complete list of guest operating systems supported with this release, see the VMware Compatibility Guide.

  • Resolved Issues. This release delivers a number of bug fixes that have been documented in the Resolved Issues section.

Earlier Releases of ESXi 5.0

Features and known issues of ESXi 5.0 are described in the release notes for each release. To view release notes for earlier releases of ESXi 5.0, see the following:

  • VMware ESXi 5.0 Update 1 Release Notes
  • VMware ESXi 5.0 Release Notes

Internationalization

VMware vSphere 5.0 Update 2 is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese

vSphere Client Locale Forcing Mode

With vSphere 5.0 Update 2, you can configure the VMware vSphere Client to display its interface text in English even when the machine on which it runs uses a non-English locale. You can set this configuration for the duration of a single session by supplying a command-line switch. This configuration applies to the interface text and does not affect other locale-related settings such as date and time or numeric formatting.

The following vSphere Client command causes the individual session to appear in English:

vpxClient -locale en_US

Compatibility and Installation

ESXi, vCenter Server, and vSphere Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and previous versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Client, and optional VMware products. In addition, check this site for information about supported management and backup agents before installing ESXi or vCenter Server.

The vSphere Web Client and the vSphere Client are packaged with the vCenter Server and modules ZIP file. You can install one or both clients from the VMware vCenter™ Installer wizard.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 5.0.2 adds support for ESXi 5.0 Update 2 and vCenter Server 5.0 Update 2 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.


Hardware Compatibility for ESXi

To determine which processors, storage devices, SAN arrays, and I/O devices are compatible with vSphere 5.0 Update 2, use the ESXi 5.0 Update 2 information in the VMware Compatibility Guide.

Upgrades and Installations for supported CPUs. vSphere 5.0 Update 2 supports only CPUs that have LAHF and SAHF CPU instruction sets. During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 5.0 Update 2. For CPU support, see the VMware Compatibility Guide.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 5.0 Update 2, use the ESXi 5.0 Update 2 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines with virtual hardware versions 4.0 and later are supported with ESXi 5.0 Update 2. Hardware version 3 is no longer supported. To use hardware version 3 virtual machines on ESXi 5.0 Update 2, upgrade virtual hardware. See the vSphere Upgrade documentation.
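If you prefer the command line to the vSphere Client for this task, the virtual hardware can also be upgraded from the ESXi Shell; a minimal sketch, assuming the vmsvc/upgrade operation is available on your build, with a hypothetical virtual machine ID of 42 (the virtual machine must be powered off first):

vim-cmd vmsvc/getallvms    # list virtual machines and note the Vmid of the target machine
vim-cmd vmsvc/upgrade 42    # upgrade the virtual hardware of the machine with Vmid 42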

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for step-by-step guidance on installing and configuring ESXi and vCenter Server.

After successful installation, you must perform some licensing, networking, and security configuration. For information about these configuration tasks, see the vSphere documentation.

Migrating Third-Party Solutions

ESX/ESXi hosts might contain third-party software, such as Cisco Nexus 1000V VEMs or EMC PowerPath modules. The ESXi 5.0 architecture is changed from ESX/ESXi 4.x so that customized third-party software packages (VIBs) cannot be migrated when you upgrade from ESX/ESXi 4.x to ESXi 5.0 and later.
When you upgrade a 4.x host with custom VIBs that are not in the upgrade ISO, you can proceed with the upgrade, but you receive an error message listing the missing VIBs. To upgrade or migrate such hosts successfully, use Image Builder to create a custom ESXi ISO image that includes the missing VIBs. To upgrade without including the third-party software, use the ForceMigrate option or select the option to remove third-party software modules during the remediation process in vSphere Update Manager. For information about how to use Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation. For information about upgrading with third-party customizations or with vSphere Update Manager, see the vSphere Upgrade and Installing and Administering VMware vSphere Update Manager documentation.
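To illustrate the Image Builder workflow described above, the following PowerCLI sketch clones the Update 2 image profile, adds a missing VIB, and exports an installable ISO; the depot paths, the profile name ESXi50U2-custom, the vendor string, and the package name partner-module are hypothetical placeholders:

Add-EsxSoftwareDepot C:\depot\update-from-esxi5.0-5.0_update02.zip    # ESXi 5.0 Update 2 offline bundle
Add-EsxSoftwareDepot C:\depot\partner-depot.zip    # depot that contains the missing VIBs
New-EsxImageProfile -CloneProfile ESXi-5.0.0-20121202001-standard -Name ESXi50U2-custom -Vendor Example
Add-EsxSoftwarePackage -ImageProfile ESXi50U2-custom -SoftwarePackage partner-module
Export-EsxImageProfile -ImageProfile ESXi50U2-custom -ExportToIso -FilePath C:\depot\ESXi50U2-custom.iso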

L3-routed NFS Storage Access

vSphere 5.0 Update 2 supports L3 routed NFS storage access when your environment meets the following conditions:
  • Use Cisco's Hot Standby Router Protocol (HSRP) on the IP router. If you are using a non-Cisco router, use Virtual Router Redundancy Protocol (VRRP) instead.
  • Use Quality of Service (QoS) to prioritize NFS L3 traffic on networks with limited bandwidth, or on networks that experience congestion. See your router documentation for details.
  • Follow the routed NFS L3 best practices recommended by your storage vendor. Contact your storage vendor for details.
  • Disable Network I/O Resource Management (NetIORM).
  • If you are planning to use systems with top-of-rack switches or switch-dependent I/O device partitioning, contact your system vendor for compatibility and support.
In an L3 environment, the following additional restrictions apply:
  • The environment does not support VMware Site Recovery Manager.
  • The environment supports only the NFS protocol. Do not use other storage protocols such as FCoE over the same physical network.
  • The NFS traffic in this environment does not support IPv6.
  • The NFS traffic in this environment can be routed only over a LAN. Other environments such as WAN are not supported.
  • The environment does not support Distributed Virtual Switch (DVS).

Upgrades for This Release

For instructions about how to upgrade vCenter Server and ESXi hosts, see the vSphere Upgrade documentation.

Upgrading VMware Tools

VMware ESXi 5.0 Update 2 contains the latest version of VMware Tools. VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system. See the VMware Tools section of Resolved Issues for a list of VMware Tools issues resolved in this release of ESXi.

To determine an installed VMware Tools version, see Verifying a VMware Tools build version (KB 1003947).
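From within a Linux guest, you can also query the installed VMware Tools version directly; a minimal example (on Windows guests, the equivalent utility is VMwareToolboxCmd.exe with the same option):

vmware-toolbox-cmd -v    # prints the VMware Tools version and build number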

ESX/ESXi Upgrades

You can upgrade ESX/ESXi hosts to ESXi 5.0 Update 2 in several ways.

  • vSphere Update Manager. If your site uses vCenter Server, use vSphere Update Manager to perform an orchestrated host upgrade or an orchestrated virtual machine upgrade from ESX/ESXi 4.0, 4.1, and ESXi 5.0. See the instructions in the vSphere Upgrade documentation, or for complete documentation about vSphere Update Manager, see the Installing and Administering VMware vSphere Update Manager documentation.

  • Upgrade interactively using an ESXi installer ISO image on CD-ROM or DVD. You can run the ESXi 5.0 Update 2 installer from a CD-ROM or DVD drive to perform an interactive upgrade. This method is appropriate for upgrading a small number of hosts.

  • Perform a scripted upgrade. You can upgrade or migrate from ESXi/ESX 4.x hosts to ESXi 5.0 Update 2 by running an update script, which provides an efficient, unattended upgrade. Scripted upgrades also provide an efficient way to deploy multiple hosts. You can use a script to upgrade ESXi from a CD-ROM or DVD drive, or by PXE-booting the installer.

  • ESXCLI. You can update and apply patches to ESXi 5.x hosts by using the esxcli command-line utility. You cannot use esxcli to upgrade ESX/ESXi 4.x hosts to ESXi 5.0 Update 2.
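For example, a minimal esxcli patching session from the ESXi Shell might look like the following; the datastore path is a placeholder, the host should first be placed in maintenance mode, and a reboot is required afterwards:

esxcli software sources profile list -d /vmfs/volumes/datastore1/update-from-esxi5.0-5.0_update02.zip    # list the image profiles in the bundle
esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi5.0-5.0_update02.zip -p ESXi-5.0.0-20121202001-standard    # apply the standard image profile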
Supported Upgrade Paths for Upgrade to ESXi 5.0 Update 2:

  • ESX 4.0 includes ESX 4.0 Update 1, Update 2, Update 3, and Update 4
  • ESXi 4.0 includes ESXi 4.0 Update 1, Update 2, Update 3, and Update 4
  • ESX 4.1 includes ESX 4.1 Update 1, Update 2, and Update 3
  • ESXi 4.1 includes ESXi 4.1 Update 1, Update 2, and Update 3
  • ESXi 5.0 includes ESXi 5.0 Update 1

Upgrade Deliverables                                         Supported Upgrade Tools                       ESX 4.0   ESXi 4.0   ESX 4.1   ESXi 4.1   ESXi 5.0

VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso    VMware vCenter Update Manager,                Yes       Yes        Yes       Yes        Yes*
                                                             CD Upgrade, Scripted Upgrade

update-from-esxi5.0-5.0_update02.zip                         VMware vCenter Update Manager, ESXCLI,        No        No         No        No         Yes
                                                             VMware vSphere CLI

Patch definitions downloaded from VMware portal (online)     VMware vCenter Update Manager with            No        No         No        No         Yes
                                                             patch baseline
* Note: Upgrade from ESXi 5.0 using VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso with VMware vCenter Update Manager is not supported. Instead, you must upgrade using update-from-esxi5.0-5.0_update02.zip with VMware vCenter Update Manager.

VMware vSphere SDKs

VMware vSphere provides a set of SDKs for vSphere server and guest operating system environments.

  • vSphere Management SDK. A collection of software development kits for the vSphere management programming environment. The vSphere Management SDK contains the following vSphere SDKs:

    • vSphere Web Services SDK. Includes support for new features available in ESXi 5.0 and later and vCenter Server 5.0 and later server systems. You can also use this SDK with previous versions of ESX/ESXi and vCenter Server. For more information, see the VMware vSphere Web Services SDK Documentation.

    • vSphere vCenter Storage Monitoring Service (SMS) SDK. SMS 2.0 is supported on vCenter Server 5.0. For more information, see vCenter SMS SDK Documentation.

    • vSphere ESX Agent Manager (EAM) SDK. EAM 1.0 is supported on ESXi 5.0 Update 2. For more information, see vSphere ESX Agent Manager.

  • vSphere Guest SDK. The VMware vSphere Guest SDK 4.0 is supported on ESXi 5.0 Update 2. For more information, see the VMware vSphere Guest SDK Documentation.

  • VMware vSphere SDK for Perl. The SDK for Perl 5.0 is supported on vSphere 5.0 Update 2. For more information, see the vSphere SDK for Perl Documentation.

Open Source Components for VMware vSphere

The copyright statements and licenses applicable to the open source software components distributed in vSphere 5.0 Update 2 are available at http://www.vmware.com/download/vsphere/open_source.html, on the Open Source tab. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent generally available release of vSphere.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi500-Update02 contains the following individual bulletins:

ESXi500-201212201-UG: Updates the ESXi 5.0 esx-base vib
ESXi500-201212202-UG: Updates the ESXi 5.0 net-igb vib
ESXi500-201212203-UG: Updates the ESXi 5.0 tools-light vib
ESXi500-201212204-UG: Updates the ESXi 5.0 net-ixgbe vib
ESXi500-201212205-UG: Updates the ESXi 5.0 esx-tboot vib
ESXi500-201212206-UG: Updates the ESXi 5.0 scsi-lpfc820 vibs
ESXi500-201212207-UG: Updates the ESXi 5.0 net-bnx2x vib
ESXi500-201212208-UG: Updates the ESXi 5.0 net-e1000e vib
ESXi500-201212209-UG: Updates the ESXi 5.0 misc-drivers vib
ESXi500-201212210-UG: Updates the ESXi 5.0 net-tg3 vib
ESXi500-201212211-UG: Updates the ESXi 5.0 ipmi-ipmi-si-drv vib


Patch Release ESXi500-Update02 Security-only contains the following individual bulletins:

ESXi500-201212101-SG: Updates the ESXi 5.0 esx-base vib
ESXi500-201212102-SG: Updates the ESXi 5.0 tools-light vib

Patch Release ESXi500-Update02 contains the following image profiles:

ESXi-5.0.0-20121202001-standard
ESXi-5.0.0-20121202001-no-tools

Patch Release ESXi500-Update02 Security-only contains the following image profiles:

ESXi-5.0.0-20121201001s-standard
ESXi-5.0.0-20121201001s-no-tools


For information on patch and update classification, see KB 2014447.
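To confirm what is installed on a host after patching, you can query the host from the ESXi Shell; a minimal example:

esxcli software profile get    # show the image profile currently applied to the host
esxcli software vib list    # list installed VIBs and their versions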

Resolved Issues

This section describes resolved issues in this release in the following subject areas: CIM and API, Miscellaneous, Networking, Security, Server Configuration, Storage, Upgrade and Installation, Virtual Machine Management, vMotion and Storage vMotion, VMware HA and Fault Tolerance, and VMware Tools.

For a list of issues that might occur if you upgrade from vSphere 5.0 Update 2 to vSphere 5.1, see KB 2040662.

CIM and API

  • vSphere Client might detect a non-existent power supply sensor
    For some ESXi systems, the vSphere Client might display information about a non-existent power supply sensor.

    This release improves handling of sensor data from the IPMI sensor data repository (SDR) to resolve the issue.
  • ESXi 5.0.x System Event Log (SEL) is empty on certain servers
    The System Event Log in the vSphere Client might be empty if ESXi 5.0.x is run on certain physical servers.
    The host's IPMI logs (/var/log/ipmi/0/sel) might also be empty.
    An error message similar to the following might be written to /var/log/messages:

    Dec 8 10:36:09 esx-200 sfcb-vmware_raw[3965]: IpmiIfcSelReadAll: failed call to IpmiIfcSelReadEntry cc = 0xff


    This issue is resolved in this release.
  • SMBIOS UUID reported by ESXi 5.0 hosts might be different from the actual SMBIOS UUID
    If the SMBIOS version of the ESXi 5.0 system is 2.6 or later, the SMBIOS UUID reported by the ESXi 5.0 host might be different from the actual SMBIOS UUID. The byte order of the first three fields of the UUID is incorrect.

    This issue is resolved in this release.
  • Cannot check the number of open file descriptors of sfcb processes
    This release adds log entries to check the maximum number of open file descriptors of sfcb processes.
    You can check the limits for open file descriptors in the CIM logs by performing the following steps:
    1. Set the CIM log level to 6 by using the esxcfg-advcfg -s 6 /UserVars/CIMLogLevel command.
    2. Restart the sfcbd service by using the /etc/init.d/sfcbd-watchdog restart command.
    3. Verify that the /var/log/messages log file contains entries similar to the following for the maximum limits on open file descriptors:
      sfcb-HTTP-Daemon[30847]: --- Limit of maximum open file descriptors: soft Limit - 512 Hard Limit - 1024

Miscellaneous

  • vm-support utility cannot collect partition table listing of VMFS 5 volumes by using fdisk -lu
    The vm-support utility is enhanced to collect the output of partedUtil getptbl and partedUtil getUsableSectors. This helps gather partition information for VMFS 5 volumes.
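    For reference, the same partedUtil queries can be run manually from the ESXi Shell; the device name below is a placeholder:

    partedUtil getptbl /vmfs/devices/disks/naa.0123456789abcdef    # print the partition table of the device
    partedUtil getUsableSectors /vmfs/devices/disks/naa.0123456789abcdef    # print the first and last usable sectors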

  • vm-support script does not collect raw device mapping information of VMDK files
    When you use the vm-support utility, raw device mapping information of the virtual machine VMDK files is not collected.

    This issue is resolved in this release.

  • Visuals in third-party applications are corrupted when the VMware SVGA 3D driver is used in 3D-enabled mode
    When you use the VMware SVGA 3D driver with the 3D enable option set, the text labels in group boxes of WPF applications are overwritten by rectangular frames. This issue was observed in Atlas Client running inside virtual machines hosted on ESXi.

    This issue is resolved in this release.

  • IBM ULTRIUM-HH5 tape device does not support SCSI commands with the ordered set attribute
    The IBM ULTRIUM-HH5 tape device does not support SCSI commands with the ordered set attribute and might fail with a SCSI device status Check Condition and sense key 0x05/0x49/0x00 INVALID MESSAGE ERROR when SCSI requests with the ordered set attribute are sent to the device.

    This issue is resolved in this release.

  • VDDK API calls on Windows fail to access large virtual disks using UNC pathnames
    When you attempt to read a virtual disk whose size is greater than 2 GB by using VDDK APIs on Windows, the API calls might fail. The VDDK logs might contain error messages similar to the following:
    DISKLIB-LINK : "<UNC pathname>.vmdk" : failed to open (The file is too large).

    This issue is resolved in this release.
  • Running the commands esxcfg-info, esxcfg-resgrp -l, or vm-support results in error messages in log files
    When you run the commands esxcfg-info, esxcfg-resgrp -l, or vm-support on an ESXi host by using the ESXi Shell or SSH, error messages similar to the following might be logged in the syslog.log file:

    2012-08-01T09:40:12Z esxcfg-info: ResourceGroup: Skipping CPU times for : Vcpu Id 11777 Times due to error. max # of processors: 4 < 11777

    2012-07-03T00:57:28Z esxcfg-resgrp: ResourceGroup: Skipping CPU times for : Vcpu Id 55340 Times due to error. max # of processors: 1 < 55340

    The issue has no operational impact on running virtual machines.

    This issue is resolved in this release.
  • The timeout option does not work when you re-enable the ESXi Shell and SSH
    If you set a non-zero timeout value for SSH and the ESXi Shell, both are disabled after the timeout value is reached. However, if you re-enable SSH or the ESXi Shell without changing the timeout setting, they no longer time out.

    This issue is resolved in this release.
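    The timeout is controlled by the UserVars.ESXiShellTimeOut advanced option; a minimal example of setting a 600-second timeout from the ESXi Shell (the value is illustrative):

    esxcfg-advcfg -s 600 /UserVars/ESXiShellTimeOut    # disable the ESXi Shell and SSH 600 seconds after they are enabled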
  • ESXi host fails with a purple diagnostic screen when you try to plug in or unplug a keyboard or mouse through a USB port
    When you attempt to plug in or unplug a keyboard or a mouse through the USB port, the ESXi host fails with the following error message:
    PCPU## locked up. Failed to ack TLB invalidate.

    For more information, see KB 2000091.

    This issue is resolved in this release.
  • NTP synchronization might fail on Solaris 10 guest operating system
    This release adds support for Guest Timer Calibration for Solaris 10 guests and allows the guest operating system to measure the TSC correctly within the tolerance of NTP.

    This release resolves the NTP synchronization issue.
  • ESXi 5.0 host fails due to a world slot memory leak
    When the ESXi host attempts to create a world group heap without releasing the memory associated with the world slot, the process fails due to a world slot memory leak.

    This issue is resolved by releasing the memory associated with the world slot.

  • ESXi host fails while performing quiesced replication
    When a virtual machine is configured for replication and goes through a state transition, the HBR (Host Based Replication) manager expects all of the virtual machine's disks to be in the same quiesced state. However, if the virtual machine has more than one disk, some disks might have finished replication while others are still being replicated, so this assumption does not always hold. In this case, HBR incorrectly asserts the condition and causes an ESXi host failure.

    This issue is resolved in this release.

  • Misleading error messages in the hostd log when you assign a VMware vSphere Hypervisor vRAM license to an ESXi host with pRAM greater than 32 GB
    When you attempt to assign a VMware vSphere Hypervisor Edition vRAM license key to an ESXi host with a physical RAM size of more than 32 GB, the ESXi host might log false error messages similar to the following to /var/log/vmware/hostd:
    2012-08-08T16:39:18.593Z [2AA78B90 error 'Default' opID=HB-host-84@121-9c61c8e-a8] Unable to parse MaxRam value:
    2012-08-08T16:39:18.594Z [2AA78B90 error 'Default' opID=HB-host-84@121-9c61c8e-a8] Unable to parse MaxRamPerCpu value:
    2012-08-08T16:39:18.594Z [2AA78B90 error 'Default' opID=HB-host-84@121-9c61c8e-a8] Unable to parse MinRamPerCpu value:
    2012-08-08T16:39:18.594Z [2AA78B90 error 'Default' opID=HB-host-84@121-9c61c8e-a8] Unable to parse vram value:


    This issue is resolved in this release.

  • The CreateTemporaryFileInGuest() function might result in a GuestPermissionDeniedFault exception
    When you reconfigure a powered on virtual machine to add a SCSI controller and attempt to create a temporary file in the guest operating system, the operation might fail with a GuestPermissionDeniedFault exception.

    This issue is resolved in this release.

  • vSphere network core dump does not collect complete data
    vSphere network core dump does not collect complete data if the disk dump fails to collect some data due to insufficient dump slot size.

    This issue is resolved in this release. Any failures due to disk dump slot size will no longer affect network core dump.

  • Virtual machines running Windows guest operating system on an ESXi host with vShield Endpoint might fail with a blue screen
    An erroneous API call in vShield Endpoint driver vsepflt.sys might cause virtual machines running Windows guest operating system to fail with a blue diagnostic screen.
      
    This issue is resolved in this release.

Networking

  • ESXi host with approximately 1024 dvPorts stops responding
    When an ESXi host uses more than approximately 1024 dvPorts, it stops responding.
    The following warning messages are seen in the vmkwarning log file:
    WARNING: Heap: 2639: Heap dvsLargeHeap (65582032/67108864): Maximum allowed growth (1527808) too small for size (3719168)
    WARNING: Heap: 2900: Heap_Align(dvsLargeHeap, 3715488/3715488 bytes, 8 align) failed. caller: 0x418025d29338


    This issue is resolved in this release by increasing the default maximum size of the dvsLargeHeap and by adding an advanced configuration option (DVSLargeHeapMaxSize) to allow administrators to further increase the maximum size of the dvsLargeHeap.
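    On a patched host, the new option can be read and raised from the ESXi Shell; a minimal sketch, assuming the option is exposed under the /Net advanced configuration path (the value shown is illustrative):

    esxcfg-advcfg -g /Net/DVSLargeHeapMaxSize    # read the current maximum dvsLargeHeap size
    esxcfg-advcfg -s 100 /Net/DVSLargeHeapMaxSize    # raise the maximum dvsLargeHeap size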

  • ESXi host might stop responding with a purple screen if the MTU is changed multiple times
    Changing the MTU multiple times might lead to netGPHeap depletion and a memory leak, causing the ESXi host to fail with a purple screen, due to an issue with the Intel ixgbe async driver.

    The vmkernel.log file might contain heap exhaustion messages similar to the following:
    WARNING: Heap: 2525: Heap netGPHeap already at its maximum size. Cannot expand
    WARNING: Heap: 2900: Heap_Align(netGPHeap, 8192/8192 bytes, 64 align) failed. caller: 0x41802c10a03f
    WARNING: NetPort: 1244: failed to enable port 0x2000002: Out of memory
    NetPort: 1426: disabled port 0x2000002
    Uplink: 5240: vmnic2: Failed to enable the uplink port 0x2000002: Out of memory
    <3>ixgbe: vmnic2: ixgbe_alloc_tx_queue: allocated tx queue 1
    <3>ixgbe: vmnic2: ixgbe_alloc_tx_queue: allocated tx queue 2
    <3>ixgbe: vmnic2: ixgbe_alloc_tx_queue: allocated tx queue 3 <6>ixgbe 0000:03:00.0: vmnic2: changing MTU from 9000 to 1500


    This issue is resolved in this release.
  • After DVMirror sessions are reconfigured, promiscuous mode might not work as expected
    On a dvPortgroup, a promiscuous port might not work in promiscuous mode if the DVMirror sessions are reconfigured.

    This issue is resolved in this release.
  • ESXi host might fail with a purple diagnostic screen at Kseg_ReleaseVA() or Kseg_ReleasePtr() with a NULL "va" or pointer
    An ESXi host might fail with a purple diagnostic screen at Kseg_ReleaseVA() or Kseg_ReleasePtr() with a NULL "va" or pointer due to an unclean exception handling code path in the ESXi network stack.

    This issue is resolved in this release.
  • Remote Desktop IP virtualization might not work on a Windows Server 2008 R2 virtual machine running on vSphere 4.0 Update 1
    IP virtualization, which allows you to allocate unique IP addresses to RDP sessions, might not work on a Windows Server 2008 R2 64-bit virtual machine running on vSphere 4.0 Update 1. However, IP virtualization works when you configure Remote Desktop Services on a physical Windows Server 2008 R2 machine, or when you run a Windows Server 2008 R2 virtual machine on XenServer 5.5 Update 2 Dell OEM Edition.
    This issue might occur if you install VMware Tools after installing Remote Desktop services.

    This issue is resolved in this release.
  • A large number of UDP packets are dropped when you use the VMXNET3 adapter
    A large number of UDP packets are dropped when you use the VMXNET3 adapter with a Linux guest operating system installed on an ESXi 5.0 host.

    This issue is resolved in this release.
  • RHEL 6 virtual machine configured with RDM devices through a PVSCSI adapter might encounter I/O failure
    A RHEL 6 virtual machine configured with Raw Device Mapping (RDM) devices through a Paravirtualized SCSI (PVSCSI) adapter might encounter I/O failure after a bus reset of the RDM devices. During reset, the PVSCSI adapter returns DID_RESET, causing the Linux SCSI layer to retry the command.

    This issue is resolved in this release.
  • Network bandwidth is not shared fairly among virtual machines of a network resource pool
    Network bandwidth is not allocated fairly to all the virtual machines of a network resource pool due to limitations in the current implementation.

    This issue is resolved in this release. The improved algorithm optimally allocates bandwidth to all virtual machines sharing the same network resource pool.
  • Netdump fails during core dump collection after a purple diagnostic screen appears
    Netdump fails during core dump collection after a purple diagnostic screen appears if the vSwitch security option MAC Address Changes is set to Reject.

    This issue is resolved in this release.
  • When you disable coalescing on ESXi, the host fails with a purple screen
    In ESXi, when VMXNET3 is used as the vNIC in some virtual machines and you turn off packet coalescing, the ESXi host might fail with a purple screen as the virtual machine boots up.

    This issue is resolved in this release by correcting the coalescing checking and assertion logic.
  • Network connectivity to a virtual machine configured to use IPv6 might fail after installing VMware Tools
    Network connectivity to guest operating systems that use kernel versions 2.6.34 or higher and are configured to use IPv6 might not work after you install VMware Tools.

    This issue is resolved in this release.
  • vSphere Client might not display IPv6 addresses on some Linux guest operating systems
    On Linux guest operating systems that are not configured with IPv4 addresses, the IPv6 addresses might not be displayed in the vSphere Client or when you use the vmware-vim-cmd command.

    This issue is resolved in this release.
  • IBM server fails with a purple diagnostic screen while trying to inject slow path packets
    If the metadata associated with a slowpath packet is copied without checking whether enough data is mapped, the metadata moves into an unmapped area, resulting in a page fault.

    This issue is resolved in this release.
  • The iSCSI initiator login timeout is not inherited in ESXi 5.0 Update 1
    The iSCSI initiator login timeout value that is set should be inherited for the discovery phase. This does not happen in ESXi 5.0 Update 1.

    This issue is resolved in this release.
  • Long running vMotion operations might result in unicast flooding
    When using the multiple-NIC vMotion feature with vSphere 5, if vMotion operations continue for a long time, unicast flooding is observed on all interfaces of the physical switch. If the vMotion takes longer than the MAC address table aging time, the source and destination hosts start receiving high amounts of network traffic.

    This issue is resolved in this release.

  • If VMXNET 3 is used as the virtual network adapter, Network Address Translation (NAT) on Windows Server 2008 R2 might not work
    If you attempt to access the Internet through a NAT server on the Windows Server 2008 R2 operating system with two virtual VMXNET 3 network adapters on ESXi, the NAT server might not work.

    This issue is resolved in this release.
  • Physical NICs set to Auto-negotiate cannot be changed to Fixed by using Host Profiles if the same speed and duplex settings are present
    Host Profile compliance checks compare the speed and duplex of a physical NIC against the host profile. If the speed and duplex of a physical NIC of an ESXi host match those of the host profile, the ESXi host shows as compliant, even if the physical NIC is set to Auto-negotiate and the host profile is set to Fixed. Also, physical NICs set to Auto-negotiate cannot be changed to Fixed by using Host Profiles if the speed and duplex settings of the ESXi host and the host profile are the same.

    This issue is resolved in this release.
  • Updates the tg3 driver to version 3.123b.v50.1
    The tg3 inbox driver version shipped with ESXi 5.0 Update 2 is 3.123b.v50.1.

Security

  • Updates libPNG library
    The libPNG library has been updated to libpng-1.2.49. Libpng-1.2.49 contains a fix for a security vulnerability. No VMware product is affected by this issue.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2011-3048 to this issue.

Server Configuration

  • Application of host profile might result in an unnecessary warning message
    After applying a host profile that has the ESXi firewall on and the Fault Tolerance rule blocked, a warning message similar to the following might be unnecessarily displayed:
    Ruleset faultTolerance doesn't match the specification

    This issue is resolved in this release.
  • Host profile creation fails with the error: Unrecognized Fixed PSP configured path
    Host Profile creation fails and the ESXi host logs messages similar to the following to /var/log/vmware/hostd:
    Error: 'nmp.nmpProfile.FixedPspPolicy: Unrecognized Fixed PSP configured path iqn.2000-04.com.qlogic:qle4062c.yk10ny9cl5yk.1-00c0dd1c341f,iqn.1992-04.com.emc:cx.ckm00094800328.a2,t,1-naa.6006016090d0260071c460297bc1df11'
    This issue occurs if you have configured a preferred path for iSCSI LUNs with fixed path policy.

    This issue is resolved in this release.
  • ESXi 5.0 host experiences a purple diagnostic screen with the errors on some HP servers with PCC support
    Some HP servers experience a situation where the PCC (Processor Clocking Control or Collaborative Power Control) communication between the VMware ESXi kernel (VMkernel) and the server BIOS does not function correctly. As a result, one or more PCPUs might remain in SMM (System Management Mode) for many seconds. When the VMkernel detects that a PCPU is not available for an extended period of time, a purple diagnostic screen appears, displaying messages similar to the following:

    PCPU 39 locked up. Failed to ack TLB invalidate (total of 1 locked up, PCPU(s): 39).
    0x41228efc7b88:[0x41800646cd62]Panic@vmkernel#nover+0xa9 stack: 0x41228efe5000
    0x41228efc7cb8:[0x4180064989af]TLBDoInvalidate@vmkernel#nover+0x45a stack: 0x41228efc7ce8


    @BlueScreen: PCPU 0: no heartbeat, IPIs received (0/1).
    ...
    0x4122c27c7a68:[0x41800966cd62]Panic@vmkernel#nover+0xa9 stack: 0x4122c27c7a98
    0x4122c27c7ad8:[0x4180098d80ec]Heartbeat_DetectCPULockups@vmkernel#nover+0x2d3 stack: 0x0
    ...
    NMI: 1943: NMI IPI received. Was eip(base):ebp:cs [0x7eb2e(0x418009600000):0x4122c2307688:0x4010](Src 0x1, CPU140)
    Heartbeat: 618: PCPU 140 didn't have a heartbeat for 8 seconds. *may* be locked up


    This release disables PCC to resolve the issue.
  • SMP virtual machine fails with monitor panic message while running kexec
    When a Linux kernel crashes, the Linux kexec feature might be used to enable booting into a special kdump kernel and gathering crash dump files. An SMP Linux guest configured with kexec might cause the virtual machine to fail with a monitor panic error during this reboot. Error messages such as the following might be logged:
    vcpu-0| CPU reset: soft (mode 2)
    vcpu-0| MONITOR PANIC: vcpu-0:VMM fault 14: src=MONITOR rip=0xfffffffffc28c30d regs=0xfffffffffc008b50


    This issue is resolved in this release.
  • The ESXi host logs an incorrect C1E state
    The vmkernel.log file and the output of the dmesg command might contain a message similar to C1E enabled by the BIOS. The message might appear even when C1E has been disabled by the BIOS, and might not appear even when C1E has been enabled by the BIOS.

    This issue is resolved in this release.
  • Host profile might fail to apply the MTU value on the vSwitches of the destination host
    When you apply a host profile that modifies only the MTU value for a standard vSwitch, the new MTU configuration is not applied to the vSwitches of the new destination host.

    This issue is resolved in this release.

Storage

  • ScsiDeviceIO-related error message might be logged during device discovery
    During device discovery, if an optional SCSI command fails under certain conditions, the ESXi 5.0 host might log the failed optional SCSI commands in the vmkernel log.

    This issue is resolved in this release.

  • VMware software FCoE (Fibre Channel over Ethernet) adapter might stop functioning when ESXi accepts VLAN ID 4095 returned by an FCoE switch
    An FCoE switch might return VLAN ID 4095 in response to the FIP VLAN discovery request coming from an ESXi server. However, VLAN ID 4095 is a reserved ID. When ESXi accepts VLAN ID 4095 from an FCoE switch, ESXi stops FIP VLAN discovery and uses this VLAN ID, causing FCoE discovery to fail.

    This issue is resolved in this release.

  • A storage fault might result in inactive datastores residing on the LUNs of the storage controllers and an inaccessible virtual machine in vSphere Client
    When a storage fault occurs, the datastores residing on the LUNs of the storage controllers might become inactive and a virtual machine might become inaccessible in the vSphere Client. The datastores remain inactive until a manual rescan is performed. This happens when the management agent, hostd, fails to handle the esx.clear.storage.redundancy.restored vob message properly.

    This issue is resolved in this release.

  • When a quiesced snapshot operation fails, the redo logs are not consolidated
    When you attempt to take a quiesced snapshot of a virtual machine, if the snapshot operation fails towards the end of its completion, the redo logs created as part of the snapshot are not consolidated. The redo logs might consume a lot of datastore space.

    This issue is resolved in this release. If the quiesced snapshot operation fails, the redo log files are consolidated.

  • The command esxcfg-scsidev -a shows physical link state with Emulex Fibre Channel over Ethernet Converged Network Adapters
    When you run the esxcfg-scsidev -a command, the physical link state with Emulex Fibre Channel over Ethernet (FCoE) Converged Network Adapters (CNA) is displayed.

    This issue is resolved in this release. The esxcfg-scsidev -a command now displays the virtual link state.

  • ESXi host fails to remove devices from a DS8300 storage array
    An ESXi host is unaware of LUNs unmapped from a DS8300 storage array, fails to mark the paths as dead, and does not remove the unmapped LUNs. Commands to unmapped LUNs on the DS8300 fail with a sense key 0x0b message, and the ESXi host is unable to record a Permanent Device Loss (PDL) case. PDL is now discovered based on ASC sense data and not based on the sense key.

    This issue is resolved in this release.

  • Deleting a virtual machine results in removal of unassociated virtual disks that are part of a virtual machine snapshot
    When a virtual machine with snapshots is deleted, independent or non-independent virtual disks that were detached from the virtual machine but are part of an existing snapshot might also be deleted.

    This issue is resolved in this release.
  • Some NFS datastores might not be preserved after upgrading from ESX 4.x to ESX 5.x
    If the NFS datastore name contains spaces, such NFS datastores might not be preserved after upgrading from ESX 4.x to ESX 5.x. After upgrading to ESX 5.x, all NFS datastores with spaces in their names are deleted. Other NFS datastores are not affected.

    This issue is resolved in this release.
  • Warning messages are logged during heartbeat reclaim operation
    VMFS might issue I/Os to a volume when a VMFS heartbeat reclaim operation is in progress or a virtual reset operation is performed on an underlying device. As a result, alert and warning messages similar to the following are logged:

    ALERT: ScsiDeviceIO: SCSIAsyncDeviceCommand:3082: Failed command 0x2a to quiesced partition naa.xxxxxxxxxxx


    WARNING: ScsiDeviceIO: 2360: Failing WRITE command (requiredDataLen=512 bytes) to write-quiesced partition naa.xxxxxxxxxxx

    Where naa.xxxxxxxxxx is the NAA ID of the volume.

    In this release, the alert messages are removed and warnings are changed to log messages.
  • ESXi host stops responding when VMW_SATP_LSI module runs out of heap memory
    This issue occurs on servers that have access to LUNs that are claimed by the VMW_SATP_LSI module. A memory leak that exists in VMW_SATP_LSI module forces the module to run out of memory. Error messages similar to the following are logged to vmkernel.log file:

    Feb 22 14:18:22 [host name] vmkernel: 2:03:59:01.391 cpu5:4192)WARNING: Heap: 2218: Heap VMW_SATP_LSI already at its maximumSize. Cannot expand.
    Feb 22 14:18:22 [host name] vmkernel: 2:03:59:01.391 cpu5:4192)WARNING: Heap: 2481: Heap_Align(VMW_SATP_LSI, 316/316 bytes, 8 align) failed. caller: 0x41800a9e91e5
    Feb 22 14:18:22 [host name] vmkernel: 2:03:59:01.391 cpu5:4192)WARNING: VMW_SATP_LSI: satp_lsi_IsInDualActiveMode: Out of memory.


    The memory leak in the VMW_SATP_LSI module has been resolved in this release.
  • Attempts to create a diagnostic partition on a blank disk or GPT partitioned disk might fail
    Attempts to create a diagnostic partition using the vSphere Client might fail with the following error message:
    Partition format unknown is not supported.
    This issue occurs if you try to create a diagnostic partition on a blank disk (a disk with no partition table) or on a GPT partitioned disk with free space available at the end.

    This issue is resolved in this release.
  • Attempts to enable flow control on the Intel 82599EB Gigabit Ethernet Controller fail
    When you try to enable flow control on the Intel 82599EB Gigabit Ethernet controller, the ixgbe driver incorrectly sets the flow control mode to priority-based flow control in which the flow control is always disabled. As a result, the error message Cannot set device pause parameters: Invalid argument appears when you try to enable flow control.

    This issue is resolved in this release.
  • ESXi hosts might fail with a purple diagnostic screen in the libata driver
    A race condition in the libata driver can cause it to fail in the ata_hsm_move function with a purple diagnostic screen and a stack trace similar to the following:
    BUG: failure at vmkdrivers/src_9/drivers/ata/libata-core.c:5833/ata_hsm_move()! (inside vmklinux)
    Panic_vPanic@vmkernel#nover+0x13 stack: 0x3000000010, 0x412209a87e00
    vmk_PanicWithModuleID@vmkernel#nover+0x9d stack: 0x412209a87e20,0x4
    ata_hsm_move@com.vmware.libata#9.2.0.0+0xa0 stack: 0x0, 0x410017c03e
    ata_pio_task@com.vmware.libata#9.2.0.0+0xa5 stack: 0x0, 0x0, 0x53fec
    vmklnx_workqueue_callout@com.vmware.driverAPI#9.2+0x11a stack: 0x0,
    helpFunc@vmkernel#nover+0x568 stack: 0x0, 0x0, 0x0, 0x0, 0x0

    This issue is resolved in this release.
  • VMFS journal replay might fail for VMFS-3 volumes on an ESXi 5.0 host
    VMFS journal replay might fail with the following message on an ESXi 5.0 host:

    J3: 3167: Failed to replay extended transaction on 4a1aa282-32d04c23-03a2-001517ab207b possibly pending online upgrade
    WARNING: HBX: 4336: Replay of journal on vol 'san1_vmfs3' failed: Bad parameter

    This issue is resolved in this release.
  • Adding a new hard disk to a virtual machine that resides on a Storage DRS-enabled datastore cluster might result in an Insufficient Disk Space error
    When you add a virtual disk to a virtual machine that resides on a Storage DRS-enabled datastore, if the size of the virtual disk is greater than the free space available in the datastore, Storage DRS might migrate another virtual machine out of the datastore to free sufficient space for adding the virtual disk. The Storage vMotion operation completes, but the subsequent addition of the virtual disk to the virtual machine might fail, and an error message similar to the following might be displayed:
    Insufficient Disk Space

    This issue is resolved in this release.
  • When you attempt to load the Content-Based Read Cache (CBRC) module on an ESXi host with a limited amount of memory, the host fails with a purple diagnostic screen
    If you attempt to load the CBRC module on an ESXi host that has a limited amount of memory, the ESXi host might fail with a purple diagnostic screen. This issue occurs when two memory functions attempt to change the value of a memory counter at the same time.

    This issue is resolved in this release.
  • Accessing corrupted metadata on a VMFS-3 volume might result in ESXi host failure
    If file metadata is corrupted on a VMFS-3 volume, the ESXi host might fail with a purple diagnostic screen while trying to access the file. VMFS file corruption is extremely rare but can be caused by external storage issues.
    This issue is resolved in this release.
  • Unable to delete files from the VMFS directory after one or more files are moved to it
    After moving one or more files into a directory (by using the mv command, or copy and paste with the vSphere Client Datastore Browser), an attempt to delete the directory or any of the files in the directory might fail. In such a case, the vmkernel.log file contains entries similar to the following:
    2012-06-25T21:03:29.940Z cpu4:373534)WARNING: Fil3: 13291: newLength 85120 but zla 2
    2012-06-25T21:03:29.940Z cpu4:373534)Fil3: 7752: Corrupt file length 85120, on FD <281, 106>, not truncating


    This issue is resolved in this release.

  • ESXi hostd agent might consume very high CPU, resulting in performance degradation
    When vCloud Director fetches a screenshot of the virtual machine desktop from the ESXi host, the hostd agent might enter an infinite loop, resulting in 100% CPU usage. The CPU usage might not decrease until you restart hostd.

    This issue is resolved in this release.
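    On hosts that do not yet have this fix, the looping process can be cleared from the ESXi Shell by restarting the management agent:

    /etc/init.d/hostd restart    # restart the hostd management agent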

Upgrade and Installation

  • Cisco UCS blade (booting from FCoE/iSCSI/FC SAN) might lose configuration after a fresh installation of or upgrade to ESXi 5.0 Update 1
    This issue might occur after an upgrade to ESXi 5.0 Update 1, where the initial reboot succeeds; however, on subsequent reboots, the UCS blade loses the configuration and reverts to its previous state.
    After a fresh installation of ESXi 5.0 Update 1, configuration changes might be lost on reboot.
    You might face a similar issue while performing any of the following actions:
    • Installing third-party drivers
    • Installing security patches
    • Shutting down ESXi 5.0 Update 1 for maintenance
    • Rebooting ESXi 5.0 Update 1 for maintenance
    • Shutting down ESXi 5.0 Update 1 due to an unplanned power outage
    This issue is resolved in this release.
  • The acceptance level of a stateless ESXi host might not be consistent with the acceptance level of the image profile it boots from
    A stateless ESXi host might have the PartnerSupported acceptance level regardless of the acceptance level of the image profile that it boots from.

    This issue is resolved in this release.

  • DNS might not get configured on hosts installed using scripts that specify DHCP
    If you install ESXi on a host by using a script that specifies that the host obtain its network settings from DHCP, after booting, the host is not configured with a DNS address. The DNS setting is set to manual with no address specified.

    This issue is resolved in this release.

  • Initial boot up of an ESXi host might cause local disks attached to the host to be reformatted as VMFS
    When an ESXi host boots up for the first time, auto-partitioning wipes out and reformats all local disks attached to the host as VMFS.

    This release resolves the issue by disabling auto-partitioning; the default option is now set to False. A new boot option, AutoPartition, has been introduced that enables auto-partitioning when set to True.
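    If auto-partitioning is wanted on first boot, one approach is to pass the option on the installer's boot command line; a minimal sketch, assuming the option is appended to the kernelopt line in boot.cfg (verify the exact option spelling for your build):

    kernelopt=autoPartition=TRUE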

  • Reinstallation of ESXi 5.0 does not remove the Datastore label of the local VMFS of an earlier installation
    Reinstallation of ESXi 5.0 with an existing local VMFS volume retains the Datastore label even after the user chooses the overwrite datastore option to overwrite the VMFS volume.

    This issue is resolved in this release.
  • RDMs attached to Solaris virtual machines might be overwritten when ESXi hosts are upgraded using Update Manager
    When upgrading to ESXi 5.0.x using Update Manager, an error in comprehending the disk partition type might cause the RDMs attached to Solaris virtual machines to be overwritten. This might result in loss of data on the RDMs.

    This issue is resolved in this release.
  • Upgrading the ESX host changes the firewall rules from disabled to enabled
    When you upgrade an ESX host from 4.1 to 5.0 Update 1, firewall rules change from being disabled to enabled. This happens after you create a host profile, modify it, and then apply the same profile to the host.
    This issue is resolved in this release.
  • After upgrading from ESXi 4.x to ESXi 5.0.x, the upgraded hosts might fail to reconnect to vCenter Server
    If you modify the /etc/rc.local file in ESXi 4.1.x hosts, and upgrade these ESXi 4.x hosts to ESXi 5.0.x, the upgraded hosts might not be able to reconnect to vCenter Server due to licensing related issues. Error messages similar to the following might be displayed:
    The Unlicensed license for Host xxx does not include VMware DRS. Upgrade the license


    The vpxd log files might contain entries similar to the following:

    012-04-26T11:58:37.069-04:00 [05420 warning 'Default' opID=C8581389-00000EF2] [LicMgr] Trying to remove licenses. Host was never registered with the license manager.
    012-04-26T11:58:37.069-04:00 [05420 trivia 'QueryServiceProvider' opID=C8581389-00000EF2] Clearing uncommitted generations on this thread for LicenseManager
    012-04-26T11:58:37.069-04:00 [05420 trivia 'VpxProfiler' opID=C8581389-00000EF2] Ctr: CheckVCLicense/TotalTime = 0 ms
    012-04-26T11:58:37.069-04:00 [05420 error 'Default' opID=C8581389-00000EF2] [LicMgr] feature drs not included in license assigned to host-129707.
    012-04-26T11:58:37.069-04:00 [05420 trivia 'VpxProfiler' opID=C8581389-00000EF2] Ctr: CheckingHostFeatures/TotalTime = 0 ms


    This issue is resolved in this release.
  • Disallow upgrades from ESX 4.1 with extents on the local datastore to ESXi 5.0
    Upgrading from ESX 4.1 with extents on the local datastore to ESXi 5.x is not a supported use case. However, the earlier upgrade process allowed this upgrade and dropped the extents on the local datastore without displaying any error or warning message.

    This issue is resolved in this release by adding a check to the pre-check script to detect such a situation. If detected, a message is displayed prompting the user to terminate the upgrade or migration.
  • When using Auto Deploy, the gPXE process might time out
    When you use Auto Deploy to boot ESXi 5.0 hosts, the gPXE process might time out while attempting to get an IP address from DHCP servers, and the Auto Deploy boot process stops abruptly.

    This issue is resolved in this release. The total timeout period is increased from about 30 seconds to 60 seconds.
  • Warning messages are displayed while installing Solaris 11 GA on an ESX host
    When you install Solaris 11 GA on an ESX host from the virtual machine console and configure it with an LSI Logic Parallel controller, the ESX host always displays the following warning messages:
    unknown ioc_status = 4 and
    incomplete write – giving up


    The warning messages do not affect or block the guest operating system installation. You can still use Solaris 11 GA. However, this issue is also observed when copying files in bulk.

    This issue is resolved in this release.
  • ESXi 5.x scripted installation incorrectly warns that USB or SD media does not support VMFS, despite the --novmfsondisk parameter in the kickstart file
    If you use scripted installation to install ESXi 5.0 on a disk that is identified as USB or SD media, the installer might display the warning message:
    The disk (<disk-id>) specified in install does not support VMFS.
    This message is displayed even if you have included the --novmfsondisk parameter for the install command in the kickstart file.

    This issue is resolved in this release.
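    For reference, a minimal kickstart fragment that installs ESXi on the first USB disk without creating a VMFS datastore might look like the following; the disk filter and the password are illustrative values:

    # ks.cfg fragment
    accepteula
    install --firstdisk=usb --novmfsondisk
    rootpw MySecretPassword1!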
  • Issues using Microsoft Windows Deployment Services (WDS) to PXE boot virtual machines that use the VMXNet3 network adapter
    Attempts to PXE boot virtual machines that use the VMXNET3 network adapter by using Microsoft Windows Deployment Services (WDS) might fail with messages similar to the following:

    Windows failed to start. A recent hardware or software change might be the cause. To fix the problem:
    1. Insert your Windows installation disc and restart your computer.
    2. Choose your language setting, and then click "Next."
    3. Click "Repair your computer."
    If you do not have the disc, contact your system administrator or computer manufacturer for assistance.

    Status: 0xc0000001

    Info: The boot selection failed because a required device is inaccessible.

    This issue is resolved in this release.

Virtual Machine Management

  • Virtual machines might fail to power on if the virtual device BIOS filename is an empty string
    When you edit the virtual machine config options and set the virtual device BIOS filename to an empty string through the API (for example, lsibios.filename = ""), the virtual machine fails to power on.

    This issue is resolved in this release.

  • Virtual machine fails with a monitor panic message if paging is disabled
    Error messages similar to the following are written to vmware.log:

    vcpu-0| MONITOR PANIC: vcpu-1:VMM64 fault 14: src=MONITOR
    vcpu-0| rip=0xfffffffffc262277 regs=0xfffffffffc008c50


    This issue is resolved in this release.
  • Cannot create a quiesced snapshot after an independent disk is deleted from a virtual machine
    If an independent disk is deleted from a virtual machine, attempts to create a quiesced snapshot of a virtual machine might fail as the disk mode data for a given SCSI node might be stale. An error message similar to the following might be displayed:
    Status: An error occurred while quiescing the virtual machine. See the virtual machine's event log for details.

    Log files might contain entries similar to the following:
    HotAdd: Adding scsi-hardDisk with mode 'independent-persistent' to scsi0:1

    ToolsBackup: changing quiesce state: STARTED -> DONE
    SnapshotVMXTakeSnapshotComplete done with snapshot 'back': 0
    SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine. (40).


    This issue is resolved in this release.

vMotion and Storage vMotion

  • Creation of quiesced snapshot of Microsoft Windows Server 2008 R2 might fail on an ESXi 5.0 host
    When creating a quiesced snapshot of a Microsoft Windows Server 2008 R2 virtual machine, if you specify the working directory, the snapshot operation might fail with the following error message:
    Snapshot guest failed: Failed to quiesce the virtual machine.

    This issue is resolved in this release.

  • When you live migrate Windows 2008 virtual machines from ESX 4.0 to ESXi 5.0 and then perform a Storage vMotion, quiesced snapshots fail
    A Storage vMotion operation on ESXi 5.0 by default sets disk.enableUUID to true for a Windows 2008 virtual machine, thus enabling application quiescing. Subsequent quiesced snapshot operations fail until the virtual machine undergoes a power cycle.
    This issue is resolved in this release.
  • ESXi host fails with a purple screen during a Storage vMotion operation
    After completing a Storage vMotion operation, ESXi disconnects the mirror device it created for the operation. However, under certain conditions, it might reference an uninitialized pointer, which might result in the ESXi host failing with a purple screen.

    This issue is resolved in this release.

VMware HA and Fault Tolerance

  • Secondary FT virtual machine running on ESXi host might fail
    On an ESXi host, a secondary Fault Tolerance virtual machine with a VMXNET 3 adapter might fail. Error messages similar to the following are written to vmware.log:

    Dec 15 16:11:25.691: vmx| GuestRpcSendTimedOut: message to toolbox timed out.
    Dec 15 16:11:25.691: vmx| Vix: [115530 guestCommands.c:2468]: Error VIX_E_TOOLS_NOT_RUNNING in VMAutomationTranslateGuestRpcError(): VMware Tools are not running in the guest
    Dec 15 16:11:30.287: vcpu-0| StateLogger::Commiting suicide: Statelogger divergence
    Dec 15 16:11:31.295: vmx| VTHREAD watched thread 4 "vcpu-0" died


    This issue does not occur on a virtual machine installed with an E1000 adapter.

    This issue is resolved in this release.

VMware Tools

  • The Driver Verifier on a Windows 2008 virtual machine might fail to respond
    When the Driver Verifier option is enabled on a Windows 2008 virtual machine, the VMCI socket lock does not function properly.

    This issue is resolved in this release.

  • VMware Tools configuration utility might fail to execute scripts successfully on Windows 8 or Windows Server 2012 virtual machines
    On Windows 8 or Windows Server 2012 virtual machines, the VMware Tools configuration utility VMwareToolboxCmd.exe might fail to run scripts, displaying an error message similar to the following:
    VMwareToolboxCmd.exe: Administrator permissions are needed to perform script operations. Use an administrator command prompt to complete these tasks.

    This issue is resolved in this release.
  • Virtual machines using VESA drivers on Windows 2008 R2 operating system experience performance issues
    VMware Tools might not install any graphics driver on a Windows 2008 R2 operating system. As a result, the virtual machine uses the default VESA driver. This causes performance problems in some hardware devices of the guest operating system.

    This issue is resolved in this release. VMware Tools installs the WDDM driver by default, to improve performance.
  • VMware Tools installed from a package manager modifies system file permissions incorrectly
    When you install VMware Tools by using the RPM package manager on the CentOS 6 operating system, system file permissions are modified incorrectly.

    This issue is resolved in this release.
  • VMware Tools on a Linux virtual machine might fail intermittently
    VMware Tools includes a shared library file named libdnet. When other software, such as Dell OpenManage, is installed, another shared library with the same name is created on the file system. When VMware Tools loads, it loads Dell OpenManage's libdnet.so.1 library instead of VMware Tools' libdnet.so. As a result, the guest OS information might not be displayed in the Summary tab of the vSphere Client, and the NIC information might also not be displayed.

    This issue is resolved in this release.
  • VMware Tools service might fail with VMware Tools unrecoverable error
    The VMware Tools service (vmtoolsd.exe) might fail with a VMware Tools unrecoverable error stack due to a NULL pointer error.

    This issue is resolved in this release.
  • VMware Tools upgrade does not replace the VMCI driver required for Remote Desktop IP virtualization
    When you upgrade VMware Tools, IP virtualization fails. This happens because the ESXi host fails to check for the new VMCI driver version and is unable to install the vsock DLL files.

    This issue is resolved in this release.
  • VMware Tools might fail when more than 16 VLAN interfaces are created
    On ESXi 5.0 hosts, if more than 16 VLAN interfaces are created for a virtual machine, VMware Tools might stop responding.

    This issue is resolved in this release.
  • Installation of VMware Tools on a Solaris virtual machine might automatically change the MTU size to 9000
    When you install or upgrade the VMware Tools version available with ESXi 5.0 and ESXi 5.0 Update 1 on Solaris virtual machines, the MTU size might automatically change to 9000. Even if you change the MTU size, it reverts to 9000 when you restart the guest operating system.

    This issue is resolved in this release.
  • Installation or upgrade of VMware Tools in Windows Guests completes with an error message
    After you complete installing or upgrading VMware Tools on a Windows guest virtual machine, an error message similar to either of the following might be displayed:
    There is no disk in the drive
    or
    No disk: exception Processing Message c0000013

    This issue is resolved in this release.
  • Virtual machine might fail during a backup involving quiesced snapshot
    The virtual machine fails during a backup involving a quiesced snapshot. This happens because the synchronous manifest file copy operation fails due to an inconsistency in the callback process, forcing the system to enter an invalid state.

    This issue is resolved in this release.
  • Using the no-reboot option while installing VMware Tools does not prevent virtual machines from rebooting on Windows XP and Windows 2003 guests
    The commands setup.exe /S /v"/qn REBOOT=R" and setup.exe /S /v"/qn REBOOTPROMPT=S" suppress the reboot of virtual machines after installation of VMware Tools. On ESXi 5.0 hosts, installation of VMware Tools installs the Visual C++ runtime even if it is already installed. This leads to a repair of the Visual C++ runtime and necessitates a reboot of the virtual machine even when a command to suppress the reboot is used. This issue is specific to Windows XP and Windows 2003 or earlier versions.

    This issue is resolved in this release.
  • File permissions of /etc/fstab might change after VMware Tools is installed
    When VMware Tools is installed on a virtual machine such as SUSE Linux Enterprise Server 11 SP1, the file permission attribute of /etc/fstab might change from 644 to 600.

    This issue is resolved in this release.
  • E1000 network interface card loses static IP settings after a VMware Tools upgrade
    On the Windows 2003 operating system, when you upgrade VMware Tools, the static IP settings of the E1000 network adapter are lost and the guest network adapter is set to DHCP. This happens after you uninstall VMware Tools by using Add or Remove Programs.

    This issue is resolved in this release.
  • Installing OSPs for VMware Tools on SUSE Linux Enterprise Server 11 SP 3 and SUSE Linux Enterprise Server 11 SP 4 fails
    Attempts to install the OSPs for the VMware Tools package on SUSE Linux Enterprise Server 11 SP 3 and SUSE Linux Enterprise Server 11 SP 4 fail with the following error message:
    ERROR: Dependency resolution failed:
    Unresolved dependencies:
    There are no installable providers of vmware-tools-esx
    Marking this resolution attempt as invalid.

    Installation fails because of an unresolved dependency in the vmware-tools-esx meta-package, which causes the package to look for the vmmouse_drv.so binary and fail to find it.

    This issue is resolved in this release.

  • Windows guest operating system running on an ESXi 5.0 host with vShield Endpoint and VMware Tools might display sharing violation errors
    In an environment with the vShield Endpoint component bundled with VMware Tools, a Windows guest operating system running on an ESXi 5.0 host might display sharing violation errors while accessing network files. An error message similar to the following might appear when you attempt to open a network file:
    Error opening the document. This file is already open or is used by another application.

    This issue is resolved in this release.
     

Known Issues

The following known issues have been discovered through rigorous testing and will help you understand some behavior you might encounter in this release. This list of issues pertains to this release of ESXi 5.0 Update 2 only. Some known issues from previous releases might also apply to this release. If you encounter an issue that is not listed in this known issues list, you can review the known issues from previous releases, search the VMware Knowledge Base, or let us know by providing feedback.

Known Issues List

Read through all of the Known Issues to find items that apply to you. Known issues not previously documented are marked with the * symbol. The issues are grouped as follows.

Installation
  • Extraneous networking-related warning message is displayed after ESXi 5.0 Update 2 is installed
    After you install ESXi 5.0 Update 2, the ESXi Direct Console User Interface (DCUI) displays a warning message similar to the following:

    Warning: DHCP look up failed. You may be unable to access this system until you customize its network configuration


    However, the host acquires DHCP IP and can ping other hosts.

    Workaround: This error message is benign and can be ignored. The error message disappears if you press Enter on the keyboard.

  • In scripted installations, the ESXi installer does not apply the --fstype option for the part command in the ks file
    The --fstype option for the part command is deprecated in ESXi 5.0. The installer accepts the --fstype option without displaying an error message, but does not create the partition type specified by the --fstype option. By default, the installer always creates VMFS5 partitions in ESXi 5.0. You cannot specify a different partition type with the --fstype option for the part command.

  • Rules information missing when running Image Builder cmdlet to display modified image profile in PowerShell 1.0
    After you install vSphere PowerCLI on Microsoft PowerShell 1.0 and add an OEM software package to an image profile, when you list the image profile, information about the Rules property is missing.

    Workaround: Access the rules information by viewing Rules property of the image profile object.

  • Scripted ESXi installation or upgrade from CD or DVD fails unless the boot line command uses uppercase characters for the script file name
    When you perform a scripted installation or upgrade from an ESXi 5.0 installer ISO written to CD or DVD along with the installation or upgrade script (kickstart file), the installer recognizes the kickstart file name only in uppercase, even if the file was named in lowercase. For example, if the kickstart file is named ks.cfg, and you use the ks=cdrom:/ks.cfg boot line command to specify the kickstart file location, the installation fails with an error message similar to HandledError: Error (see log for more info): cannot find kickstart file on cd-rom with path -- /ks.cfg.

    Workaround: Use uppercase for the kickstart file name in the boot line command to specify the kickstart file, for example: ks=cdrom:/KS.CFG

  • Misleading error message when you attempt to install VIBs and you use a relative path
    While attempting to install a depot, VIB, or profile by using the esxcli software vib command, if you specify a relative path, the operation fails with the error No such file or directory: '/var/log/vmware/a.vib'

    Workaround: Specify the absolute path when you perform the installation.
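
    For example, assuming the VIB file has been uploaded to a datastore (the path is illustrative):
    esxcli software vib install -v /vmfs/volumes/datastore1/patches/example.vib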

  • After you install the vSphere Web Client, a browser opens and displays a blank page
    After you install the vSphere Web Client, a browser opens and displays a blank page when you click Finish in the installation wizard. The page remains blank and the browser does not connect to the vSphere Administration application.

    Workaround: Close the browser and start the vSphere Administration Application page from the Start menu.

Upgrade

  • Cannot apply ESXi 5.0 Update 2 VIBs through PowerCLI on an ESXi host connected through vCenter Server
    On an ESXi 5.0 host managed by vCenter Server, attempts to apply ESXi 5.0 Update 2 VIBs by using GET-ESXCLI commands in PowerCLI fail with error messages similar to the following:
    2011-11-18T09:53:50Z esxupdate: root: ERROR: Traceback (most recent call last):
    2011-11-18T09:53:50Z esxupdate: root: ERROR: File "/usr/lib/vmware/esxcli-software", line 441, in <module>
    2011-11-18T09:53:50Z esxupdate: root: ERROR: main()
    2011-11-18T09:53:50Z esxupdate: root: ERROR: File "/usr/lib/vmware/esxcli-software", line 432, in main
    2011-11-18T09:53:50Z esxupdate: root: ERROR: ret = CMDTABLE[command](options)
    2011-11-18T09:53:50Z esxupdate: root: ERROR: File "/usr/lib/vmware/esxcli-software", line 329, in VibInstallCmd
    2011-11-18T09:53:50Z esxupdate: root: ERROR: raise Exception("No VIBs specified with -n/--vibname or -v/--viburl.")
    2011-11-18T09:53:50Z esxupdate: root: ERROR: Exception: No VIBs specified with -n/--vibname or -v/--viburl.


    Workaround: None
  • A live update with ESXCLI fails with a VibDownloadError message
    When you perform the following tasks in sequence, the reboot-required transaction fails and a VibDownloadError message appears.

    1. You perform a live update by using the esxcli software profile update or esxcli software vib update command.
    2. Before rebooting, you perform a transaction that requires a reboot, and the transaction does not complete successfully. One common possible failure is signature verification, which can be checked only after the VIB is downloaded.
    3. Without rebooting the host, you attempt to perform another transaction that requires a reboot. The transaction fails with a VibDownloadError message.

    Workaround: Perform the following steps to resolve the problem; an example of the live update command appears after the steps.

    1. Reboot the ESXi host to clean up its state.
    2. Repeat the live install.
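
    For example, the live update command might look like the following (the depot path and profile name are illustrative):
    esxcli software profile update -d /vmfs/volumes/datastore1/depot.zip -p Example-ESXi-Profile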
       
  • During scripted upgrades from ESX/ESXi 4.x to ESXi 5.0, MPX and VML disk device names change, which might cause the upgrade to fail
    MPX and VML disk device names might not persist after a host reboot. If the names change after reboot in a scripted upgrade, the upgrade might be interrupted.

    Workaround: When possible, use Network Address Authority Identifiers (NAA IDs) for disk devices. For machines that do not have disks with NAA IDS, such as Hewlett Packard machines with CCISS controllers, perform the upgrade from a CD or DVD containing the ESXi installer ISO. Alternatively, in a scripted upgrade, you can specify the ESX or ESXi instance to upgrade by using the upgrade command with the --firstdisk= parameter. Installation and upgrade script commands are documented in the vSphere Installation and Setup and vSphere Upgrade documentation.

  • ESX console and esxi_install.log report a failure to acquire a DHCP address during upgrade when the ESX system uses manually assigned IP addresses on a subnet without DHCP service
    This situation occurs when an ESX system that has manually assigned IP addresses is run on a subnet that does not have a DHCP server or when the DHCP server is out of capacity. In either case, when the ESX system is upgraded, the system will pause for up to one minute attempting to fetch an IPv4 address from a DHCP server.

    Workaround: None. After the system pauses for up to one minute, it will continue to the successful completion of the upgrade. The system might display a prompt to press Enter to continue. You can either press Enter or ignore the prompt. In either case, the system will proceed with the upgrade after the pause.

Networking

  • Emulex be2net network adapters with device ID 0710 fail to be probed in ESXi 5.0 (KB 2041665)*

  • A host profile with vDS configuration created from vCenter Server 4.x might fail to apply gateway settings for stateless booted ESX 5.0 hosts
    While using vSphere Auto Deploy with a 4.x host profile, gateway settings might not be applied for stateless booted ESX 5.0 hosts. As a result, these hosts lose network connectivity and will not be automatically added to vCenter Server 5.0.

    Workaround: To use a 4.x host profile with vDS configuration to boot up stateless ESX 5.0 hosts, perform the following steps:

    1. Boot an ESX 5.0 stateless host configured using Auto Deploy without specifying a 4.x host profile.
    2. Apply the 4.x host profile after the stateless host is added to vCenter Server 5.0.
    3. Create a new host profile from the stateless ESX 5.0 booted host. Use the newly created host profile with vSphere Auto Deploy to boot up stateless ESX 5.0 hosts.

    To boot up ESX 5.0 stateless hosts without vDS configuration, use a 4.x host profile that does not contain vDS settings. You can also disable or remove vDS settings from a 4.x host profile as follows:

    • Disable vDS and its associated port groups from the host profile by using the Enable/Disable Profile Configuration option at Host Profile > Networking Configuration.
    • Remove vDS and its associated port groups from the host profile by using the Edit Profile option at Host Profile > Networking Configuration.

  • Service Console details do not appear in the Add Host to vSphere Distributed Switch wizard and the Manage Hosts wizard
    When you add a 4.x ESX host to a distributed switch, the details of the Service Console network adapters do not appear in the Add Host wizard on the Network Connectivity page in the Virtual Adapter details section. Typically, the MAC address, IP address, and subnet mask appear in this section.

    Workaround: To view the details of a Service Console network adapter, exit the Add Hosts or Manage Hosts wizard, then navigate to Host > Configuration > Networking in the vSphere client.

    If the Service Console network adapter is deployed on a standard switch:

    1. Locate the switch.
    2. Click the "Properties..." button.
    3. Select the Service Console network adapter.
      Make a note of the name of the adapter, which appears in the VSS diagram.

    If the Service Console network adapter is deployed on a distributed switch:

    1. Navigate to the vSphere Distributed Switch tab.
    2. Locate the distributed switch and select "Manage Virtual Adapter...".
    3. Locate and select the Service Console Network adapter.

  • The ESXi Dump Collector server silently exits
    When the ESXi Dump Collector server's port is configured with an invalid value, the server exits silently without an error message. Because the ESXi Dump Collector server receives core dumps from ESXi hosts through this port, these silent exits prevent ESXi host core dumps from being collected. Because ESXi Dump Collector does not send error messages to vCenter Server, the vSphere administrator is unaware of the problem. If not resolved, this issue affects supportability when failures occur on an ESXi host.

    Workaround: To avoid this failure, configure the ESXi Dump Collector server with a port from within the recommended port range. Using the default port is recommended.
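
    On the ESXi host side, the configured collector address and port are referenced when you set up network core dumps, for example (the VMkernel interface, server address, and port are illustrative):
    esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.10 --server-port 6500
    esxcli system coredump network set --enable true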

  • Use of a 10G QLogic NIC, version QLE3142, with a nx_nic driver causes servers to stop functioning during gPXE boot
    If a 10G QLogic NIC, version QLE3142, with a nx_nic driver is used to gPXE boot into the ESXi Stateless boot configuration, the ESXi server stops functioning and fails to boot.

    Workaround: Use other NICs for gPXE boot.

  • Enabling more than 16 VMkernel network adapters causes vSphere vMotion to fail
    vSphere 5.0 has a limit of 16 VMkernel network adapters enabled for vMotion per host. If you enable more than 16 VMkernel network adapters for vMotion on a given host, vMotion to or from that host might fail. The error message says refusing request to initialize 17 stream ip entries, where the number indicates how many VMkernel network adapters you have enabled for vMotion.

    Workaround: Disable vMotion on VMkernel network adapters until, at most, 16 are enabled for vMotion.
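
    To list the VMkernel network adapters present on a host, you can run the following command from the ESXi Shell; vMotion itself is enabled or disabled on individual adapters through the vSphere Client:
    esxcli network ip interface list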

  • Network I/O control overrides 802.1p tags in outgoing packets in ESXi 5.0
    In ESXi 5.0, the network I/O control bridges the gap between virtual networking Quality of Service (QoS) and physical networking QoS, allowing you to specify an 802.1p tag on a per-resource pool basis.
    A side effect of this functionality is that each resource pool is tagged with a default 802.1p tag (0), even if the tag has not been explicitly set. Class of Service (CoS) bit tagging inside a virtual machine is overridden when leaving the ESXi host if network I/O control is enabled.

    Workaround: None. You can choose to not use network I/O control.

  • IPv6-only VMkernel network adapter configurations not supported in the host profile
    When you use the vSphere Client to set IP configurations for a host profile, you are allowed to set IPv4-only, IPv6-only, or a mix of IPv4 and IPv6 settings for VMkernel network adapters. IPv6-only settings, however, are not supported in host profiles. If you configure VMkernel network adapters with IPv6-only settings, you are asked to provide IPv4 configurations in the host profile answer file.

    Workaround: Perform one of the following tasks:

    • Use only IPv6 settings to configure your VMkernel network adapters through the vSphere Client, and do not use a host profile.
    • Include both IPv6 and IPv4 configurations for VMkernel network adapters when creating and applying the host profile, then disable the IPv4 configurations for the VMkernel network adapters after applying the profile.

  • Some Cisco switches drop packets with priority bit set
    VMware vSphere Network I/O Control allows you to tag outgoing traffic with 802.1p tags. However, some Cisco switches (4948 and 6509) drop the packets if the tagged packets are sent on the native VLAN (VLAN 0).

    Workaround: None.

  • Significant delay in the ESXi boot process during VLAN configuration and driver loading
    ESXi hosts with BE2 or BE3 interfaces encounter a significant delay during driver loading and VLAN configuration. The length of the delay increases with the number of BE2 and BE3 interfaces on the host and can last for several minutes.

    Workaround: None.

  • Adding a network resource pool to a vSphere Distributed Switch fails with the error Cannot complete a vSphere Distributed Switch operation for one or more host members
    This error message indicates that one or more of the hosts on the distributed switch is already associated with the maximum number of network resource pools. The maximum number of network resource pools allowed on a host is 56.

    Workaround: None.

  • Adding a network resource pool to a vSphere Distributed Switch fails with the error vim.fault.LimitExceeded
    This error message indicates that the distributed switch already has the maximum number of network resource pools. The maximum number for network resource pools on a vSphere Distributed Switch is 56.

    Workaround: None.

  • LLDP does not display system names for Extreme switches
    By default, system names on Extreme switches are not advertised. Unless a system name is explicitly set to advertise on the Extreme switch, LLDP cannot display this information.

    Workaround: Run the configure lldp ports <port ID> advertise system-name command to advertise the system name on the extreme switch.

  • Truncating mirrored packets causes ESXi to fail
    When a mirrored packet is longer than the mirrored packet length set for the port mirroring session, ESXi fails. Other operations that truncate packets might also cause ESXi to fail.

    Workaround: Do not set a mirrored packet length for a port mirroring session.

  • Fault Tolerance is not compatible with vSphere DirectPath I/O with vSphere vMotion
    When Fault Tolerance is enabled on a virtual machine, DirectPath I/O with vMotion is inactive for all virtual adapters on the virtual machine.

    Workaround: Disable Fault Tolerance and reboot the virtual machine before enabling DirectPath I/O with vMotion.

  • vSphere DirectPath I/O With vSphere vMotion is disabled by VMCI-based applications
    When you use any VMCI-based application on a Cisco UCS system, DirectPath becomes inactive on all virtual machine network adapters.

    Workaround: Stop using all VMCI-based applications and reboot the virtual machine to restore vSphere DirectPath I/O.

Storage
  • I/O latency threshold appears to be 15ms after you disable I/O metrics for a datastore cluster
    After you disable I/O metrics for a datastore cluster, the Summary page for the datastore cluster continues to display an I/O latency threshold value of 15ms (the default).

    Workaround: None. To view the correct value, select Datastore Cluster > Storage.

  • Link to enter SDRS maintenance mode appears on Summary page of standalone datastore
    Only datastores that are a part of a datastore cluster can successfully enter Storage DRS maintenance mode. However, a link to enter Storage DRS maintenance mode appears on the Summary page for a datastore that is not in a datastore cluster. When you click Enter SDRS maintenance mode for a standalone datastore, the datastore attempts to enter maintenance mode and the task appears to be pending indefinitely.

    Workaround: Cancel the Enter SDRS Maintenance Mode task in the Recent Tasks pane of the vSphere Client.

  • An all paths down (APD) condition during Storage vMotion can result in communication failure between vCenter Server and ESXi host
    If an APD condition occurs when you migrate virtual machines using Storage vMotion, vCenter Server disconnects the host involved in Storage vMotion from the vCenter Server inventory. This condition persists until the background Storage vMotion operation completes. This action could take a few minutes or hours depending on the Storage vMotion operation time. During this time, no other operation can be performed for that particular host from vCenter Server.

    Workaround: None. After the Storage vMotion operation completes, vCenter Server reconnects the host back to the inventory. None of the running virtual machines on non-APD datastores are affected by this failure.

  • Symbolic links added to a datastore might cause the Datastore Browser to incorrectly display datastore contents
    When you add symbolic links at the top level of a datastore, either externally in an NFS server or by logging in to the host, you might not be able to see the correct datastore information, such as its files and folders, when you browse the datastore. Symbolic links referencing incorrect files and folders might cause this problem.

    Workaround: Remove the symbolic links. Do not use symbolic links in datastores.

  • Attempts to add an extent to an ATS-capable VMFS datastore fail
    You can only span an ATS-capable datastore over an ATS-capable device. If you select a device that does not support ATS to extend the ATS-capable datastore, the operation fails. The vSphere Client displays the An error occurred during host configuration message. In the log file, you might also find the following error message: Operation failed, unable to add extent to filesystem.

    Workaround: Before adding an extent to an ATS datastore, verify whether the extent device supports ATS by running the following command:
    esxcli storage core device vaai status get -d=device_ID
    The output must display the following information:
    ATS Status: supported

  • Storage DRS might not behave as expected when balancing I/O load
    When you use IOMeter software to generate I/O load to test Storage DRS, the IOMeter populates the files with only zeros by default. This data does not contain random patterns of ones and zeros, which are present in real data and which are required by Storage DRS to determine the I/O characteristics and performance of the datastore.

    Workaround: When you test Storage DRS load balancing, use real data to populate at least 20 percent of the storage space on the datastore. If you use IOMeter software to generate I/O load, choose a version that allows you to write random patterns of ones and zeros to your files.

  • Names of new virtual machine disks do not appear in Storage DRS initial placement recommendations
    When creating, cloning, or deploying from template a virtual machine on a Storage DRS-enabled datastore cluster, the placement recommendations or faults dialog box does not list the names of the new virtual machine hard disks. The dialog box displays Place new virtual machine hard disk on <datastore name>.

    Workaround: None. When virtual machines are being created, hard disk names are not assigned until the disks are placed. If the virtual machine hard disks are of different size and are placed on different datastores, you can use the Space Utilization before and after statistics to estimate which disk is placed on which datastore.

  • Storage DRS appears to be disabled when you use the Scheduled Task wizard to create or clone a virtual machine
    When you create a scheduled task to clone or create a virtual machine, and select a datastore cluster as the destination storage for the virtual machine files, the Disable Storage DRS check box is always selected. You cannot deselect the Disable Storage DRS check box for the virtual machine in the Scheduled Task wizard.

    Workaround: None. The Disable Storage DRS check box is always selected in the Scheduled Task wizard. However, after the Scheduled Task runs and the virtual machine is created, the automation level of the virtual machine is the same as the default automation level of the datastore cluster.

  • vSphere Client displays an error when you attempt to unmount an NFS datastore with Storage I/O Control enabled
    If you enable Storage I/O Control for an NFS datastore, you cannot unmount that datastore. The following error message appears: The resource is in use.

    Workaround: Before you attempt to unmount the datastore, disable Storage I/O Control.

  • ESXi cannot distinguish between thick provision lazy zeroed and thick provision eager zeroed virtual disks on NFS datastores with Hardware Acceleration support
    When you use NFS datastores that support Hardware Acceleration, the vSphere Client allows you to create virtual disks in Thick Provision Lazy Zeroed (zeroedthick) or Thick Provision Eager Zeroed (eagerzeroedthick) format. However, when you check the disk type on the Virtual Machine Properties dialog box, the Disk Provisioning section always shows Thick Provision Eager Zeroed as the disk format no matter which format you selected during the disk creation. ESXi does not distinguish between lazy zeroed and eager zeroed virtual disks on NFS datastores.

    Workaround: None.

  • After migration, the mode of an IDE RDM disk in physical compatibility does not change to Independent Persistent
    The mode of the IDE RDM disk in physical compatibility does not change to Independent Persistent after you migrate the virtual machine with the disk from the ESX/ESXi 4.x host to ESXi 5.0.

    Workaround: After migration, use the vSphere Client to change the disk's mode to Independent Persistent.

  • Attempts to add a virtual compatibility RDM with a child disk to an existing virtual machine fail
    If you try to add a virtual compatibility RDM that has a child disk to an existing virtual machine, the operation fails. The vSphere Client displays the following error message: Reconfigure failed: vim.fault.DeviceUnsupportedForVmPlatform.

    Workaround: Remove a child disk to be able to add a virtual compatibility RDM.

  • With software FCoE enabled, attempts to display Storage Maps fail with an error message
    This problem affects only those ESXi hosts that have been added to vCenter Server without any previous software FCoE configuration. After you enable software FCoE adapters on these hosts, attempts to display Storage Maps in the vSphere Client fail. The following error message appears: An internal error has occurred: Failed to serialize response.

    Workaround: Configure software FCoE on the ESXi host first, and then add the host to vCenter Server.

  • An NFS datastore with adequate space displays out of space errors
    This problem happens only when you use Remote Procedure Calls (RPC) client sharing and mount multiple NFS volumes from the same NFS server IP address. In this configuration, when one of the NFS volumes runs out of space, other NFS volumes that share the same RPC client might also report no space errors.

    Workaround: Disable RPC client sharing on your host by performing this task (a command-line equivalent appears after the steps):

    1. In the vSphere Client inventory panel, select the host.
    2. Click the Configuration tab, and click Advanced Settings under Software.
    3. Click NFS in the left panel and scroll down to NFS.MaxConnPerIP on the right.
    4. Change the default value to 128.
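
    The same setting can also be changed from the ESXi Shell (a sketch; the value matches step 4):
    esxcli system settings advanced set -o /NFS/MaxConnPerIP -i 128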

  • After reboot, stateless host cannot detect iSCSI datastores
    If a stateless host is added to the Cisco Nexus 1000V Series Switch and configured with MTU of 9000, then after reboot, the host cannot detect iSCSI datastores even though it is able to discover the corresponding devices.

    Workaround: To make the datastores visible, click Refresh on the Configuration > Storage screen of the vSphere Client.

Server Configuration
  • A change to the SATP-PSP rule for host profiles applied to an ESXi host is not reflected in the host after reboot
    After you change the Storage Array Type Plugin (SATP) Path Selection Policy (PSP) rule, apply the host profile, and reboot a host provisioned with Auto Deploy, the change is not reflected in the SATP PSP for each of the devices. For ESXi hosts not provisioned with Auto Deploy, the SATP PSP change is correctly updated in the host. However, the ESXi host fails a compliance check with the host profile.

    Workaround: After applying the host profile to the ESXi host, delete the host profile and extract a new host profile from the ESXi host, then attach it to the host before rebooting. To do this, use the Update from Reference Host feature in the Host Profiles UI. This task deletes the host profile and extracts a new profile from the host while maintaining all the current attachments.

    Use the esxcli command to edit the SATP PSP on the host itself before you extract the host profile. Do not use the host profile editor to edit the SATP PSP.
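
    For example, changing the default PSP for an SATP from the ESXi Shell might look like the following (the SATP and PSP names are illustrative; substitute the ones that apply to your storage):
    esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR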

  • Applying a host profile with a service startup policy of off does not disable the service
    A host profile is created by using as a reference host an ESXi host that has some services disabled, and the profile is applied to a host with those services enabled. The host profile application process does not disable the services on the target ESXi host. This situation is commonly encountered by users who have enabled the ESXi Shell or SSH services on the target ESXi hosts through the Security Profile in the vSphere Client or Troubleshooting Options in the DCUI.

    Workaround: Rebooting the target host disables the services. You can also manually stop the services in the vSphere Client by configuring the host. Perform this procedure for each service; a command-line sketch for the SSH service appears after the steps.

    1. Select the host in the inventory.
    2. Click the Configuration tab.
    3. Click Security Profile in the Software section.
    4. Click Properties and select the service.
    5. Click Options.
    6. Click Stop and click OK.
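
    For the SSH service specifically, the service can also be stopped from the ESXi Shell (a sketch, assuming the vim-cmd host service operations available in ESXi 5.x):
    vim-cmd hostsvc/stop_ssh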

  • Host profile answer file status is not updated when switching the attached profile of the host
    When attaching a host profile to a host that was previously attached to another host profile, the answer file status is not updated. If the answer file status is Completed, after attaching another host profile to the host, the answer file status in the host profile view still appears as Completed. The actual status, however, might be changed to Incomplete.

    Workaround: Manually update the answer file status after attaching a host profile.

    1. In vSphere Client, select the newly attached profile in the Host Profiles inventory view.
    2. Click the Hosts and Clusters tab.
    3. Right-click the host from the Entity Name list and select Check Answer File.

    The host profile answer file status is updated.

  • Manually applying a host profile containing a large configuration might time out
    Applying a host profile that contains a large configuration, for example, a very large number of vSwitches and port groups, might time out if the target host is either not configured or only partially configured. In such cases, the user sees the Cannot apply the host configuration error message in the vSphere Client, although the underlying process on ESXi that is applying the configuration might continue to run.

    In addition, syslog.log or other log files might have error messages such as the following message:
    Error interacting with configuration file /etc/vmware/esx.conf: Timeout while waiting for lock, /etc/vmware/esx.conf.LOCK, to be released. Another process has kept this file locked for more than 20 seconds. The process currently holding the lock is hostd-worker(5055). This is likely a temporary condition. Please try your operation again.

    This error is caused by contention on the system while multiple operations attempt to gather system configuration information at the same time that the host profiles apply operation sets the configuration. Because of these errors and other timeout-related errors, even after the host profiles apply operation completes on the system, the configuration captured in the host profile might not be fully applied. Check the host for compliance to see which parts of the configuration failed to apply, and perform an apply operation again to fix the remaining noncompliance issues.

    Workaround: Perform one of the following:

    • ESXi hosts not provisioned with Auto Deploy

      1. Increase the timeout value for the apply operation by adding the following entry in the /etc/vmware/hostd/cmdMo.xml file:

        <managedObject id="2">
        <type> vim.profile.host.profileEngine.HostProfileManager </type>
        <moId> ha-hostprofileengine-hostprofilemanager </moId>
        <timeOutInSeconds> xxxx </timeOutInSeconds> <!-- add this line -->
        <version> vim.version.dev </version>
        <cmd> /usr/bin/sh </cmd>
        <arg id="1"> -l </arg>
        <arg id="2"> -c </arg>
        <arg id="3"> /usr/lib/vmware/hostd/hmo/hostProfileEngine.py --cgi </arg>
        </managedObject>


        Where xxxx is the timeout value in seconds. By default, the apply operation times out in 10 minutes. This entry lets you set a longer timeout. For example, a value of 3600 increases the timeout to 1 hour. The value you enter might vary depending on the specific host profile configuration. After you set a high enough value, the apply operation timeout error no longer appears and the task is visible in the vSphere Client until it is complete.
      2. Restart hostd, for example, by running /etc/init.d/hostd restart from the ESXi Shell.
    • Hosts provisioned with Auto Deploy

      1. Reboot the ESXi host provisioned with Auto Deploy.
      2. For ESXi hosts provisioned with Auto Deploy, ensure that the answer file is complete by performing the Update Answer File operation on the ESXi host and then rebooting.

        The configuration in the host profile and answer file is applied to the system during initialization. Large configurations might take longer to boot, but this can be significantly faster than manually applying the host profile through the vSphere Client.
  • Host Profiles compliance check fails for reference host with newly created profile
    A compliance check for a newly configured host profile, for example, one configured with iSCSI, might fail if the answer file is not updated before you check compliance.

    Workaround: Update the answer file for the profile before performing a compliance check.

  • A host profile fails to apply if the syslog logdir is set to a datastore without a path
    If the esxcli command or the vSphere Client is used to set the syslog directory to a datastore without an additional path, a host profile extracted from that system fails to apply to other hosts.

    For example, the following configures the system in a way that triggers this condition:
    esxcli system syslog config set --logdir /vmfs/volumes/datastore1

    Similarly, setting the Syslog.global.logDir to datastore1 in the Advanced Settings dialog of the host's Configuration tab also triggers this condition.

    Workaround: Perform one of the following (an example appears after the list):

    • Modify Syslog.global.logDir in the Advanced Settings dialog to have a value of "DATASTORE_NAME /" instead of "DATASTORE_NAME" before extracting the host profile.
    • Edit the host profile such that the Advanced configuration option for Syslog.global.logDir has a value of "DATASTORE_NAME /" instead of "DATASTORE_NAME".
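
    For example, a logdir value that includes a subdirectory also avoids the condition (the directory name is illustrative):
    esxcli system syslog config set --logdir /vmfs/volumes/datastore1/systemlogs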

  • Applying a host profile might recreate vSwitches and portgroups
    The vSwitches and portgroups of a host might be recreated when a host profile is applied. This can occur even if the host is compliant with the host profile.

    This problem occurs when the portgroupprofile policy options are set to use the default values. This setting leads to an issue in which the comparison between the profile and the host configuration might incorrectly fail when the profile is applied, even though the compliance check passes. The comparison failure causes the apply profile action to recreate the vSwitches and portgroups. This affects all subprofiles in portgroupprofile.

    Workaround: Change the profile settings to match the desired settings instead of selecting to use the default.

  • VMware embedded SNMP agent reports incorrect software installed date through hrSWInstalledTable from HOST-RESOURCES-MIB
    When you poll for installed software (hrSWInstalledTable RFC 2790) by using the VMware embedded SNMP agent, the installation date shown for user-installed software is not correct, because the hrSWInstalledTable from HOST-RESOURCES-MIB reports hrSWInstalledDate incorrectly.

    Workaround: To retrieve the correct installation date, use the esxcli command esxcli software vib list.

vCenter Server and vSphere Client

  • Unknown device status in the vCenter Server Hardware Status tab*
    In the Sensors view of the vCenter Server Hardware Status tab, the status for some PCI devices is displayed as Unknown Unknown #<number>. The latest PCI IDs for some devices are not listed in the /usr/share/hwdata/pci.ids file on the ESXi host. vCenter Server lists devices with missing IDs as unknown.

    The unknown status is not critical and the list of PCI IDs is updated regularly in major vSphere releases.

  • Database error while reregistering AutoDeploy on vCenter Virtual Appliance (VCVA) (KB 2014087).
  • The snmpwalk command returns an error message when you run it without using -t option
    When the snmpwalk command is run without using the -t and -r options for polling SNMP data, the VMware embedded SNMP agent does not show complete data and displays the error message, No response from host.

    Workaround: When you run the snmpwalk command, use the -t option to specify the timeout interval and the -r option to set the number of retries. For example: snmpwalk -m all -c public -v1 host-name -r 2 -t 10 variable-name.

  • The vCLI command to clear the embedded SNMP agent configuration resets the source of indications and clears trap filters
    The vicfg-snmp -r vCLI command to clear the embedded SNMP agent configuration resets the source of events or traps to the default, indications, and clears all trap filters.

    Workaround: None.

  • Enabling the embedded SNMP agent fails with Address already in use error
    When you configure a port other than udp/161 for the embedded SNMP agent while the agent is not enabled, the agent does not check whether the port is in use. This might result in a port conflict when you enable the agent, producing an Address already in use error message.

    Workaround: Enable the embedded SNMP agent before configuring the port.

Virtual Machine Management
  • Driver availability for xHCI controller for USB 3.0 devices
    Virtual hardware version 8 includes support for the xHCI controller and USB 3.0 devices. However, an xHCI driver might not be available for many operating systems. Without a driver installed in the guest operating system, you cannot use USB 3.0 devices. No drivers are known to be available for the Windows operating system at this time. Contact the operating system vendor for availability of a driver. When you create or upgrade virtual machines with Windows guest operating systems, you can continue using the existing EHCI+UHCI controller, which supports USB 1.1 and 2.0 devices for USB configuration from an ESXi host or a client computer to a virtual machine. If your Windows virtual machine has xHCI and EHCI+UHCI USB controllers, newly added USB 1.1 and USB 2.0 devices will be connected to xHCI and will not be detected by the guest.

    Workaround: Remove the xHCI controller from the virtual machine's configuration to connect USB devices to EHCI+UHCI.

  • Linux kernels earlier than 2.6.27 do not report non-power-of-two cores per socket
    Beginning with ESXi 5.0, multicore virtual CPU support allows cores-per-socket values that are not powers of two. Linux kernels earlier than 2.6.27 correctly report only power-of-two values for cores per socket. For example, some Linux guest operating systems might not report any physical ID information when you set numvcpus = 6 and cpuid.coresPerSocket = 3 in the .vmx file. Linux kernels 2.6.28 and later report the CPU and core topology correctly.
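
    For reference, the .vmx entries from the example above look like the following (values are quoted in .vmx files):
    numvcpus = "6"
    cpuid.coresPerSocket = "3"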

    Workaround: None

  • When you hot add memory to virtual machines with 64-bit Linux or 32-bit Windows 7 or Windows 8 guest operating systems, you cannot increase existing virtual memory to more than 3GB
    The following conditions apply to hot adding memory to virtual machines with 64-bit Linux and 32-bit Windows 7 and Windows 8 guest operating systems.

    • If the powered-on virtual machine has less than 3GB of memory, you cannot hot add memory in excess of 3GB.
    • If the virtual machine has 1GB, you can add 2GB.
    • If the virtual machine has 2GB, you can add 1GB.
    • If the virtual machine has 3444MB of memory, you can add 128MB.
    • If the powered-on virtual machine has exactly 3GB memory, you cannot hot add any memory.

    If the powered-on virtual machine has more than 3GB of memory, you can increase the virtual machine memory to 16 times the initial virtual machine power-on size or to the hardware version limit, whichever is smaller. The hardware version limit is 255GB for hardware version 7 and 1011GB for hardware version 8.

    64-bit Linux and 32-bit Windows 7 and Windows 8 guest operating systems freeze when memory grows from 3GB or less to greater than 3GB while the virtual machine is powered on. This vSphere restriction ensures that you do not trigger this bug in the guest operating system.

    Workaround: None.

  • CPU hot add error on hardware version 7 virtual machines
    Virtual CPU hot add is supported with the multicore virtual CPU feature for hardware 8 virtual machines.
    For hardware version 7 virtual machines with cores per socket greater than 1, when you enable CPU hot add in the Virtual Machine Properties dialog box and try to hot add virtual CPUs, the operation fails and a CPU hot plug not supported for this virtual machine error message appears.

    Workaround: To use the CPU hot-add feature with hardware version 7 virtual machines, power off the virtual machine and set the number of cores per socket to 1.
    For best results, use hardware version 8 virtual machines.

  • Hot-adding memory to a Windows 2003 32-bit system that uses the November 20, 2007 LSISAS driver can cause the virtual machine to stop responding
    The November 20, 2007 LSI SAS driver cannot correctly address memory above 3GB if such memory is not present at system startup. When you hot-add memory to a system that has less than 3GB of memory before the hot add but more than 3GB after the hot add, the Windows state is corrupted and eventually causes Windows to stop responding.

    Workaround: Use the latest LSI SAS driver available from the LSI website. Do not use the LSISAS1068 virtual adapter for Windows 2003 virtual machines.

  • Incorrect IPv6 address appears on the Summary tab for Mac OS X Server 10.6.5 and later guest operating systems
    When you click View All on the Summary tab in the vSphere Client, the list of IPv6 addresses includes an incorrect address for the link-local address. You can see the incorrect address when you run the ifconfig command and compare the output with the list of addresses in the vSphere Client. The incorrect information also appears when you run the vim-cmd command to get the GuestInfo data.

    Workaround: None

  • Creating a large number of virtual machines simultaneously causes file operations to fail
    When you create a large number of virtual machines simultaneously that reside within the same directory, the storage system becomes overwhelmed and some file operations fail. A vim.fault.CannotAccessFile error message appears and the create virtual machine operation fails.

    Workaround: Create additional virtual machines in smaller batches of, for example, 64, or try creating virtual machines in different datastores or within different directories on the same datastore.

  • USB devices passed through from an ESXi host to a virtual machine might disconnect during migration with vMotion
    When a USB device is passed through to a virtual machine from an ESXi host and the device is configured to remain connected during migration with vMotion, the device might disconnect during the vMotion operation. The devices can also disconnect if DRS triggers a migration. When the devices disconnect, they revert to the host and are no longer connected to the virtual machine. This problem occurs more often when you migrate virtual machines that have multiple USB devices connected, but occasionally happens when one or a small number of devices are connected.

    Workaround: Migrate the virtual machine back to the ESXi host to which the USB devices are physically attached and reconnect the devices to the virtual machine.

  • A virtual machine with an inaccessible SCSI passthrough device fails to power on
    If a SCSI passthrough device attached to a virtual machine has a device backing that is inaccessible from the virtual machine's host, the virtual machine fails to power on with the error, An unexpected error was received from the ESX host while powering on VM.

    Workaround: Perform one of the following procedures:

    • If the virtual machine's host has a physical SCSI device, change the device backing of the SCSI passthrough device to the host's physical SCSI device, and power on the virtual machine.
    • If the host does not have a physical SCSI device, remove the SCSI passthrough device from the virtual machine and power it on.

  • The VMware Tools system tray icon might incorrectly show the status as Out of date
    If a virtual machine is using VMware Tools that was installed with vSphere 4.x, the system tray icon in the guest operating system incorrectly shows the status as Out-of-date. In the vSphere 5.0 Client and vSphere Web Client, the Summary tab for the virtual machine shows the status as Out-of-date (OK. This version is supported on the existing host, but upgrade if new functionality does not work). VMware Tools installed with vSphere 4.x is supported and does not strictly require an upgrade for vSphere 5.0.

    Workarounds: In vSphere 5.0, use the Summary tab for the virtual machine in the vSphere Client or the vSphere Web Client to determine the status of VMware Tools. If the status is Out-of-date and you do not want to upgrade, you can use the following settings to disable the upgrade prompts and warning icon in the guest operating system:

    • If the virtual machine is set to automatically upgrade VMware Tools but you want the virtual machine to remain at the lowest supported version, set the tools.supportedOld.autoupgrade advanced configuration parameter to FALSE. This setting also disables the exclamation point icon in the guest, which indicates that VMware Tools is unsupported.
    • If VMware Tools is out of date and you want to disable the exclamation mark icon that appears on the VMware Tools icon in the system tray, set the tools.supportedOld.warn advanced configuration parameter to FALSE.

    Neither of these settings affects the behavior of VMware Tools when the Summary tab shows the status as Unsupported or Error. In these situations, the exclamation mark icon appears and VMware Tools is automatically upgraded (if configured to do so), even when the advanced configuration settings are set to FALSE. You can set advanced configuration parameters either by editing the virtual machine's configuration file (.vmx) or by using the vSphere Client or the vSphere Web Client to edit the virtual machine settings. On the Options tab, select Advanced > General, and click Configuration Parameters.
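
    If you edit the .vmx file directly, the corresponding entries look like the following:
    tools.supportedOld.autoupgrade = "FALSE"
    tools.supportedOld.warn = "FALSE"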

  • Check and upgrade Tools during power cycling feature does not work in ESXi 5.0 and later
    In ESX/ESXi 4.1, the Check and upgrade Tools during power cycling option was available to upgrade VMware Tools when the virtual machine shut down. This feature does not work in ESXi 5.0 and later. Ignore any documentation procedures related to this feature.

    Workaround: Install VMware Tools manually.

  • Mac OS X guest operating systems with high CPU and memory usage might experience kernel panic during virtual machine suspend or resume or migration operations
    Following virtual machine suspend or resume operations or migration with vMotion on a host under a heavy CPU and memory load, the Translation Lookaside Buffer (TLB) invalidation request might time out. In such cases, the Mac OS X guest operating system stops responding and a variant of one of the following messages is written to the vmware.log file:
    The guest OS panicked. The first line of the panic report is: Panic(CPU 0): Unresponsive processor
    The guest OS panicked. The first line of the panic report is: panic(cpu 0 caller 0xffffff8000224a10): "pmap_flush_tlbs()timeout: " "cpu(s) failing to respond to interrupts,pmap=0xffffff800067d8a0 cpus_to_respond=0x4"@/SourceCache/xnu/xnu-1504.7.4/osfmk/x86_64/pmap.c:2710

    Workaround: Reduce the CPU and memory load on the host or reduce the virtual CPU count to 1.

  • Virtual machine clone or relocation operations from ESXi 5.0 to ESX/ESXi 4.1 fail if replication is enabled
    If you use the hbr enablereplication command to enable replication on a virtual machine that resides on an ESXi 5.0 host and clone the virtual machine to an ESX/ESXi 4.1 or earlier host, validation fails with an operation is not supported error message. Cloning of ESXi 5.0 virtual machines on ESX/ESXi 4.1 hosts is not supported.

    Workaround: Select one of the following workarounds:

    • Clone the virtual machine onto an ESXi 5.0 host.
    • Clone or relocate a new virtual machine on an ESX/ESXi 4.1 host.
  • Cannot set VMware Tools custom scripts through VMware-Toolbox UI in Linux guest operating system when the custom scripts contain non-ASCII characters in their path
    Non-ASCII characters appear as square boxes with an X in them in the VMware Tools Properties window in Linux guest operating systems when the system locale is zh_CN.gb18030, ja_JP.eucjp, or ko_KR.euckr. In such cases, you cannot set custom VMware Tools scripts.

    Workaround: Perform one of the following tasks:

    • Change the directory name where custom VMware Tools scripts are located so that it contains ASCII characters only.
    • Set custom VMware Tools scripts by entering the vmware-toolbox-cmd script command at a shell prompt.

  • Non-ASCII DNS suffixes are not set correctly after customizing Windows XP and Windows 2003
    If you enter a non-ASCII DNS suffix in the Network Properties DNS tab when you use the Customization Specification Wizard to customize Windows XP or Windows 2003, the customization is reported as successful but the non-ASCII DNS suffix is not set correctly.

    Workaround: Set the DNS suffix manually in Windows XP and Windows 2003.

  • VMware embedded SNMP agent reports incorrect status for processors in the hrDeviceStatus object of the HOST-RESOURCES-MIB module
    When reporting the system details, the VMware embedded SNMP agent shows an incorrect status for processors. The SNMP agent reports the processor status as Unknown for the hrDeviceStatus object in HOST-RESOURCES-MIB. The ESX/net-snmp implementation of HOST-RESOURCES-MIB does not return the hrDeviceStatus object, which is equivalent to reporting an unknown status.

    Workaround: Use either CIM APIs or SMBIOS data to check the processor status.

  • The snapshot disk path in the .vmsd snapshot database file and the parent path in the delta disk descriptor file are not updated after migration
    When snapshot.redoNotWithParent is set to TRUE and you change the snapshotDirectory setting from, for example, Datastore A to Datastore B, you might see an error message that says, Detected an invalid snapshot configuration. This problem occurs when both of the following conditions exist:

    • You revert to a previous snapshot in the snapshot tree and create new snapshots from that snapshot point. The result is a nonlinear snapshot tree hierarchy.
    • The disk links in a disk chain span multiple datastores and include both the source and destination datastores. This situation occurs if you change the snapshotDirectory settings to point to different datastores more than once and take snapshots of the virtual machine between the snapshotDirectory changes. For example, you take snapshots of a virtual machine with snapshotDirectory set to Datastore A, revert to a previous snapshot, then change the snapshotDirectory settings to Datastore B and take additional snapshots. Now you migrate the virtual disk from Datastore B to Datastore A.

    The best practice is to retain the default setting, which stores the parent and child snapshots together in the snapshot directory. Avoid changing the snapshotDirectory settings or taking snapshots between datastore changes. If you set snapshot.redoNotWithParent to TRUE, perform a full storage migration to a datastore that is currently not used by the virtual machine.

    Workaround: Manually update the disk path references to the correct datastore path in the snapshot database file and disk descriptor file.

Migration
  • During Daylight Saving Time (DST) transitions, the time axis on performance charts is not updated to reflect the DST time change.
    For example, local clocks in areas that observe DST were set forward 1 hour on Sunday, March 27, 2011 at 3am. The tick markers on the time axis of performance charts should have been labeled ..., 2:00, 2:20, 2:40, 4:00, 4:20, ..., omitting ticks for the hour starting at 3am. The labels actually displayed are ..., 2:00, 2:20, 2:40, 3:00, 3:20, 3:40, 4:00, 4:20, ....

    Workaround: None

  • Virtual machine disks retain their original format after a Storage vMotion operation in which the user specifies a disk format change
    When you attempt to convert the disk format to Thick Provision Eager Zeroed during a Storage vMotion operation of a powered-on virtual machine on a host running ESX/ESXi 4.1 or earlier, the conversion does not happen. The Storage vMotion operation succeeds, but the disks retain their original disk format because of an inherent limitation of ESX/ESXi 4.1 and earlier. If the same operation is performed on a virtual machine on an ESXi 5.0 host, the conversion happens correctly.

    Workaround: None.

VMware HA and Fault Tolerance
  • vSphere HA fails to restart a virtual machine that was being migrated using vMotion when a host failure occurred.
    While a virtual machine is being migrated from one host to another, the original host might fail, become unresponsive, or lose access to the datastore containing the configuration file of the virtual machine. If such a failure occurs and the vMotion subsequently also fails, vSphere HA might not restart the virtual machine and might stop protecting it.

    Workaround: If the virtual machine fails and vSphere HA does not power it back on, power the virtual machine back on manually. vSphere HA then protects the virtual machine.

Guest Operating System

  • USB 3.0 devices might not work with Windows 8 or Windows Server 2012 virtual machines*
    When you use USB 3.0 devices with Windows 8 or Windows Server 2012 virtual machines while using a Windows or Linux operating system as the Client, error messages similar to the following might be displayed:
    Port Reset Failed
    The USB set SEL request failed

    Workaround: None

  • PXE boot or reboot of RHEL 6 results in a blank screen
    When you attempt to use the Red Hat boot loader to PXE boot into RHEL 6 in an EFI virtual machine, the screen becomes blank after the operating system is selected. This problem is also seen if you remove the splashimage directive from the grub.conf file in a regular installation of RHEL 6 and reboot the virtual machine.

    Workaround: Verify that the splashimage directive is present in grub.conf and references a file that is accessible to the boot loader.
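
    A grub.conf excerpt with the directive in place might look like the following (the device and file paths are illustrative):
    default=0
    timeout=5
    splashimage=(hd0,0)/grub/splash.xpm.gz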

  • Multiple vNICs randomly disconnect upon reboot for Mac OS X 10.6.x
    When you reboot Mac OS X 10.6 with three or more e1000 vNICs (n >= 3), an incorrect guest link state is shown.

    If n(e1000)=3, the guest link state is: 2 (connected), 1 (disconnected).
    If n(e1000)=5, the guest link state is: 3 (connected), 2 (disconnected), and so on.

    Workaround: Manually activate and deactivate the adapters with the ifconfig utility. The manual activation and deactivation do not persist through the next reboot.
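
    For example, to cycle one adapter manually from a Terminal window in the guest (the interface name is illustrative):
    sudo ifconfig en1 down
    sudo ifconfig en1 up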

Supported Hardware
  • IBM x3550 M2 Force Legacy Video on Boot must be disabled
    The IBM x3550 M2 has a firmware option called Force Legacy Video on Boot that enables legacy INT10h video support when booting from the Unified Extensible Firmware Interface (UEFI). This option is not compatible with ESXi 5.0 and must be disabled.

    Workaround: When booting the IBM x3550 M2 from UEFI, press F1 to enter the firmware setup, select System Settings > Legacy Support > Force Legacy Video on Boot, and click Disable.

Miscellaneous
    • Inaccurate monitoring of sensor data in the vCenter Server Hardware Status (KB 2012998).

    • Location of log files has changed from /var/log to /var/run/log
      ESXi 5.0 log files are located in /var/run/log. For backward compatibility, the previous location, /var/log, contains symbolic links to the most recent log files in /var/run/log. The previous location does not contain links to rotated log files.
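
      For example, listing a log file in the previous location from the ESXi Shell shows the symbolic link (the file name is illustrative):
      ls -l /var/log/hostd.log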

      Workaround: None.

    • On Linux virtual machines, you cannot install OSPs after uninstalling VMware Tools with the tar installer
      After you uninstall VMware Tools that is installed with the tar installer on a Linux virtual machine, files are left on the system. In this situation, you cannot install OSPs.

      Workaround: Run the following command: rm -rf /usr/lib/vmware-tools /etc/vmware-tools
    • When proxy server is enabled in Internet Explorer LAN settings, PowerCLI sometimes fails to add an online depot
      An online depot can be added in PowerCLI using the Add-ESXSoftwareDepot cmdlet. Under certain conditions in which the proxy server is enabled for the machine being used, PowerCLI fails to add the online depot in its session.
      This failure can occur only if all the following conditions exist.
      • The customer's site requires an HTTP proxy for accessing the Web.
      • The customer hosts a depot on their internal network.
      • The customer's proxy cannot connect to the depot on the internal network.

      Workaround:
      1. Disable proxy server in the IE LAN settings.
      2. Add the online depot in PowerCLI.
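
      After the proxy is disabled, adding the depot might look like the following (the URL is illustrative):
      Add-ESXSoftwareDepot http://depot.example.com/index.xml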