
VMware ESXi 5.5 Update 1 Release Notes

VMware ESXi™ 5.5 Update 1 | 11 MAR 2014 | Build 1623387

Last updated: 8 APR 2014

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of ESXi 5.5
  • Internationalization
  • Compatibility and Installation
  • Upgrades for This Release
  • Open Source Components for VMware vSphere 5.5 Update 1
  • Product Support Notices
  • Patches Contained in this Release
  • Resolved Issues
  • Known Issues

What's New

This release of VMware ESXi contains the following enhancements:

  • VMware Virtual SAN. Virtual SAN 5.5 is a new hypervisor-converged storage tier that extends the vSphere Hypervisor to pool server-side magnetic disks (HDDs) and solid-state drives (SSDs). By clustering server-side HDDs and SSDs, Virtual SAN creates a distributed shared datastore designed and optimized for virtual environments. Virtual SAN is a standalone product that is sold separately from vSphere and requires its own license key.

  • Resolved Issues. This release delivers a number of bug fixes that are documented in the Resolved Issues section.

Earlier Releases of ESXi 5.5

Features and known issues of ESXi 5.5 are described in the release notes for each release. To view release notes for earlier releases of ESXi 5.5, see the VMware vSphere 5.5 Release Notes.

Internationalization

VMware vSphere 5.5 Update 1 is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese
  • Traditional Chinese

Compatibility and Installation

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Client and the vSphere Web Client are packaged on the vCenter Server ISO. You can install one or both clients by using the VMware vCenter™ Installer wizard.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 5.5.1 adds support for ESXi 5.5 Update 1 and vCenter Server 5.5 Update 1 releases.
For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

ESXi and Virtual SAN Compatibility

Virtual SAN does not support clusters that are configured with ESXi hosts earlier than 5.5 Update 1. Make sure all hosts in the Virtual SAN cluster are upgraded to ESXi 5.5 Update 1 before enabling Virtual SAN. vCenter Server should also be upgraded to 5.5 Update 1.

Test Releases of Virtual SAN
Upgrading a Virtual SAN cluster from Virtual SAN Beta to Virtual SAN 5.5 is not supported.
Disable Virtual SAN Beta, and perform a fresh installation of Virtual SAN 5.5 on ESXi 5.5 Update 1 hosts. If you were testing Beta versions of Virtual SAN, VMware recommends that you recreate data that you want to preserve from those setups on vSphere 5.5 Update 1. For more information, see Retaining virtual machines of Virtual SAN Beta cluster when upgrading to vSphere 5.5 Update 1 (KB 2074147).

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 5.5 Update 1, use the ESXi 5.5 Update 1 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 5.5 Update 1, use the ESXi 5.5 Update 1 information in the VMware Compatibility Guide.

Some devices are deprecated and no longer supported on ESXi 5.5 and later. During the upgrade process, the device driver is installed on the ESXi 5.5.x host. It might still function on ESXi 5.5.x, but the device is not supported on ESXi 5.5.x. For a list of devices that have been deprecated and are no longer supported on ESXi 5.5.x, see the VMware Knowledge Base article Deprecated devices and warnings during ESXi 5.5 upgrade process.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 5.5 Update 1, use the ESXi 5.5 Update 1 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 5.5 Update 1. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 5.5 Update 1, upgrade the virtual machine compatibility. See the vSphere Upgrade documentation.

vSphere Client Connections to Linked Mode Environments with vCenter Server 5.x

vCenter Server 5.5 can exist in Linked Mode only with other instances of vCenter Server 5.5.

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for guidance about installing and configuring ESXi and vCenter Server.

Although the installations are straightforward, several subsequent configuration steps are essential. Read the following documentation:

Migrating Third-Party Solutions

You cannot directly migrate third-party solutions installed on an ESX or ESXi host as part of a host upgrade. Architectural changes between ESXi 5.1 and ESXi 5.5 result in the loss of third-party components and possible system instability. To accomplish such migrations, you can create a custom ISO file with Image Builder. For information about upgrading your host with third-party customizations, see the vSphere Upgrade documentation. For information about using Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

vSphere 5.5.x supports only CPUs with LAHF and SAHF CPU instruction sets. During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 5.5.x. If your host hardware is not compatible, a purple screen appears with a message about incompatibility. You cannot install or upgrade to vSphere 5.5.x.

Upgrades for This Release

For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.

Supported Upgrade Paths for Upgrade to ESXi 5.5 Update 1:

Source versions:

  • ESX/ESXi 4.0 (includes ESX/ESXi 4.0 Update 1, Update 2, Update 3, and Update 4)
  • ESX/ESXi 4.1 (includes ESX/ESXi 4.1 Update 1, Update 2, and Update 3)
  • ESXi 5.0 (includes ESXi 5.0 Update 1, Update 2, and Update 3)
  • ESXi 5.1 (includes ESXi 5.1 Update 1 and Update 2)
  • ESXi 5.5

Upgrade deliverable: VMware-VMvisor-Installer-5.5.0.update01-1623387.x86_64.iso
Supported upgrade tools: VMware vCenter Update Manager, CD Upgrade, Scripted Upgrade
Supported from ESX/ESXi 4.0: Yes | ESX/ESXi 4.1: Yes | ESXi 5.0: Yes | ESXi 5.1: Yes | ESXi 5.5: Yes

Upgrade deliverable: update-from-esxi5.5-5.5_update01.zip
Supported upgrade tools: VMware vCenter Update Manager, ESXCLI, VMware vSphere CLI
Supported from ESX/ESXi 4.0: No | ESX/ESXi 4.1: No | ESXi 5.0: Yes* | ESXi 5.1: Yes* | ESXi 5.5: Yes

Upgrade deliverable: Using patch definitions downloaded from the VMware portal (online)
Supported upgrade tool: VMware vCenter Update Manager with patch baseline
Supported from ESX/ESXi 4.0: No | ESX/ESXi 4.1: No | ESXi 5.0: No | ESXi 5.1: No | ESXi 5.5: Yes


*Note: Upgrade from ESXi 5.0.x or ESXi 5.1.x to ESXi 5.5 Update 1 using update-from-esxi5.5-5.5_update01.zip is supported only with ESXCLI. You need to run the esxcli software profile update --depot=<depot_location> --profile=<profile_name> command to perform the upgrade. For more information, see the ESXi 5.5.x Upgrade Options topic in the vSphere Upgrade guide.
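
For example, if the offline bundle has been copied to a datastore that is accessible from the host, a command of the following form performs the upgrade to the standard image profile of this release (the datastore path shown here is only an illustration; substitute the location of the downloaded bundle):

    esxcli software profile update --depot=/vmfs/volumes/datastore1/update-from-esxi5.5-5.5_update01.zip --profile=ESXi-5.5.0-20140302001-standard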

Open Source Components for VMware vSphere 5.5 Update 1

The copyright statements and licenses applicable to the open source software components distributed in vSphere 5.5 Update 1 are available at http://www.vmware.com/download/vsphere/open_source.html, on the Open Source tab. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of vSphere.

Product Support Notices

  • vSphere Web Client. Because Linux platforms are no longer supported by Adobe Flash, vSphere Web Client is not supported on the Linux OS. Third party browsers that add support for Adobe Flash on the Linux desktop OS might continue to function.

  • VMware vCenter Server Appliance. In vSphere 5.5, the VMware vCenter Server Appliance meets high-governance compliance standards through the enforcement of the DISA Security Technical Implementation Guide (STIG). Before you deploy VMware vCenter Server Appliance, see the VMware Hardened Virtual Appliance Operations Guide for information about the new security deployment standards and to ensure successful operations.

  • vCenter Server database. vSphere 5.5 removes support for IBM DB2 as the vCenter Server database.

  • VMware Tools. Beginning with vSphere 5.5, all information about how to install and configure VMware Tools in vSphere is merged with the other vSphere documentation. For information about using VMware Tools in vSphere, see the vSphere documentation. The Installing and Configuring VMware Tools guide is not relevant to vSphere 5.5 and later.

  • VMware Tools. Beginning with vSphere 5.5, VMware Tools do not provide ThinPrint features.

  • vSphere Data Protection. vSphere Data Protection 5.1 is not compatible with vSphere 5.5 because of a change in the way vSphere Web Client operates. vSphere Data Protection 5.1 users who upgrade to vSphere 5.5 must also update vSphere Data Protection to continue using vSphere Data Protection.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi550-Update01 contains the following individual bulletins:

ESXi550-201403201-UG: Updates ESXi 5.5 esx-base vib
ESXi550-201403202-UG: Updates ESXi 5.5 tools-light vib
ESXi550-201403203-UG: Updates ESXi 5.5 rste vib
ESXi550-201403204-UG: Updates ESXi 5.5 net-e1000e vib
ESXi550-201403205-UG: Updates ESXi 5.5 scsi-mpt2sas vib
ESXi550-201403206-UG: Updates ESXi 5.5 lsi-msgpt3 vib
ESXi550-201403207-UG: Updates ESXi 5.5 mtip32xx-native vib
ESXi550-201403208-UG: Updates ESXi 5.5 sata-ahci vib
ESXi550-201403209-UG: Updates ESXi 5.5 scsi-megaraid-sas vib
ESXi550-201403210-UG: Updates ESXi 5.5 net-igb vib
ESXi550-201403211-UG: Updates ESXi 5.5 net-tg3 vib

Patch Release ESXi550-Update01 Security-only contains the following individual bulletins:

ESXi550-201403101-SG: Updates ESXi 5.5 esx-base vib
ESXi550-201403102-SG: Updates ESXi 5.5 tools-light vib

Patch Release ESXi550-Update01 contains the following image profiles:

ESXi-5.5.0-20140302001-standard
ESXi-5.5.0-20140302001-no-tools

Patch Release ESXi550-Update01 Security-only contains the following image profiles:

ESXi-5.5.0-20140301001s-standard
ESXi-5.5.0-20140301001s-no-tools

For information on patch and update classification, see KB 2014447.
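
After the update has been applied and the host has rebooted, you can confirm which image profile is active on the host; for example, the following command, run from the ESXi Shell, reports the name of the installed image profile:

    esxcli software profile get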

Resolved Issues

This section describes resolved issues in this release:

CIM and API Issues

  • Unable to get IPMI sensor data
    When you run the esxcli hardware ipmi sdr list command, you might see an error similar to the following when resources are exhausted:
    No records or incompatible version or read failed

    This issue is resolved in this release.
  • vmklinux_9:ipmi_thread of vmkapimod displays CPU usage as 100 percent for one hour
    On an ESXi host, when the Field Replaceable Unit (FRU) inventory data is read using the Intelligent Platform Management Interface (IPMI) tool, the vmklinux_9:ipmi_thread of vmkapimod displays the CPU usage as 100 percent. This is because the IPMI tool uses the Read FRU Data command multiple times to read large inventory data.

    This issue is resolved in this release.
  • Unable to disable weak ciphers on CIM port 5989
    To disable cipher block chaining (CBC) algorithms for Payment Card Industry (PCI) compliance, you might need to disable weak ciphers on CIM port 5989. Previously, this was not permitted. With this release, you can update the configuration in sfcb.cfg to disable weak ciphers by running the following commands:

    # vi /etc/sfcb/sfcb.cfg
    sslCipherList: HIGH:!DES-CBC3-SHA
    # /etc/init.d/sfcbd-watchdog restart

    This issue is resolved in this release.

  • Query operation fails when you query CIM_System using EMC PowerPath
    When PowerPath queries the CIM_System under /VMware/esxv2/, the operation fails and an error is reported from CIM server. The error is similar to the following:

    ThreadPool --- Failed to enqueue request. Too many queued requests already: vmwaLINUX
    ThreadPool --- Failed to enqueue request. Too many queued requests already: vmware_base, active 5, queued 11 .\TreeViewHostDiscovery.cpp 611

    This issue is resolved in this release.

  • Core dumps from the Ethernet provider are observed while updating sensor data
    While updating the sensor data in the Hardware Status tab on an IBM x3650M3 server, Small Footprint CIM Broker (SFCB) core dumps from the Ethernet provider are observed. The Hardware Status tab does not display data even after multiple attempts.

    This issue is resolved in this release.

Miscellaneous Issues

  • RAM disk for SNMP trap files is not created when the host reboots
    The RAM disk for SNMP traps is not created when the host is rebooted or when the management agents are restarted from the Direct Console User Interface on an ESXi host. If an object (directory, file, link, or other) exists in /var/spool/snmp on the ESXi host, the RAM disk for SNMP traps is not created when the SNMP service starts.

    This issue is resolved in this release.

  • VMX process might fail with an error message when a virtual machine is powered off
    The VMX process might fail when you attempt to power off a virtual machine.
    An error message similar to the following might be written to the vmware.log file:

    Unexpected signal: 11

    This issue is resolved in this release.
  • JavaFX application items are not displayed correctly in a 3D enabled virtual machine
    When 3D is enabled in the virtual machine settings, the user interface components of JavaFX applications are not displayed correctly.

    This issue is resolved in this release.
  • Multiple virtual disks configured in the lsilogic virtual adapter might be unresponsive when the adapter waits for I/O completion on all the targets available
    When the lsilogic virtual adapter executes a SCSI target reset command, it waits for I/O completion on all the targets available in the virtual adapter. This might cause virtual machines with multiple virtual disks configured in the lsilogic virtual adapter to become unresponsive.

    This issue is resolved in this release.

Networking Issues

  • VMXNET3 adapter resets in Windows Server 2008 R2 due to frequent changes in RSS
    When Receive Side Scaling is enabled on a multi vCPU Windows virtual machine, you see NetPort messages for repeating MAC addresses indicating that ports are being disabled and then re-enabled. In the vmkernel.log, you see messages similar to:

    2013-07-08T06:11:58.158Z cpu4:337203)NetPort: 1424: disabled port 0x1000020
    2013-07-08T06:11:58.177Z cpu4:337203)NetPort: 1237: enabled port 0x1000020 with mac 00:50:56:9e:50:24
    2013-07-08T06:12:46.191Z cpu1:337203)NetPort: 1424: disabled port 0x1000020
    2013-07-08T06:12:46.211Z cpu7:337203)NetPort: 1237: enabled port 0x1000020 with mac 00:50:56:9e:50:24

    The vmware.log file of virtual machines that use these MAC addresses contain corresponding events similar to:

    2013-07-08T06:18:20.175Z| vcpu-1| Ethernet4 MAC Address: 00:50:56:9e:50:24
    2013-07-08T06:18:20.199Z| vcpu-1| VMXNET3 user: Ethernet4 Driver Info: version = 833450 gosBits = 2 gosType = 2, gosVer = 24848, gosMisc = 212
    2013-07-08T06:18:36.165Z| vcpu-6| Ethernet4 MAC Address: 00:50:56:9e:50:24
    2013-07-08T06:18:36.187Z| vcpu-6| VMXNET3 user: Ethernet4 Driver Info: version = 833450 gosBits = 2 gosType = 2, gosVer = 24848, gosMisc = 212

    This issue is resolved in this release. The VMXNET3 network driver is updated in this release.

  • ESXi 5.x host with virtual machines using an E1000 or E1000e virtual adapter fails with a purple diagnostic screen
    ESXi host experiences a purple diagnostic screen with errors for E1000PollRxRing and E1000DevRx when the rxRing buffer fills up and the max Rx ring is set to more than 2. The next Rx packet received that is handled by the second ring is NULL, causing a processing error.
    The purple diagnostic screen displays entries similar to the following:

    @BlueScreen: #PF Exception 14 in world 63406:vmast.63405 IP 0x41801cd9c266 addr 0x0
    PTEs:0x8442d5027;0x383f35027;0x0;
    Code start: 0x41801cc00000 VMK uptime: 1:08:27:56.829
    0x41229eb9b590:[0x41801cd9c266]E1000PollRxRing@vmkernel#nover+0xdb9 stack: 0x410015264580
    0x41229eb9b600:[0x41801cd9fc73]E1000DevRx@vmkernel#nover+0x18a stack: 0x41229eb9b630
    0x41229eb9b6a0:[0x41801cd3ced0]IOChain_Resume@vmkernel#nover+0x247 stack: 0x41229eb9b6e0
    0x41229eb9b6f0:[0x41801cd2c0e4]PortOutput@vmkernel#nover+0xe3 stack: 0x410012375940
    0x41229eb9b750:[0x41801d1e476f]EtherswitchForwardLeafPortsQuick@<None>#<None>+0xd6 stack: 0x31200f9
    0x41229eb9b950:[0x41801d1e5fd8]EtherswitchPortDispatch@<None>#<None>+0x13bb stack: 0x412200000015
    0x41229eb9b9c0:[0x41801cd2b2c7]Port_InputResume@vmkernel#nover+0x146 stack: 0x412445c34cc0
    0x41229eb9ba10:[0x41801cd2ca42]Port_Input_Committed@vmkernel#nover+0x29 stack: 0x41001203aa01
    0x41229eb9ba70:[0x41801cd99a05]E1000DevAsyncTx@vmkernel#nover+0x190 stack: 0x41229eb9bab0
    0x41229eb9bae0:[0x41801cd51813]NetWorldletPerVMCB@vmkernel#nover+0xae stack: 0x2
    0x41229eb9bc60:[0x41801cd0b21b]WorldletProcessQueue@vmkernel#nover+0x486 stack: 0x41229eb9bd10
    0x41229eb9bca0:[0x41801cd0b895]WorldletBHHandler@vmkernel#nover+0x60 stack: 0x10041229eb9bd20
    0x41229eb9bd20:[0x41801cc2083a]BH_Check@vmkernel#nover+0x185 stack: 0x41229eb9be20
    0x41229eb9be20:[0x41801cdbc9bc]CpuSchedIdleLoopInt@vmkernel#nover+0x13b stack: 0x29eb9bfa0
    0x41229eb9bf10:[0x41801cdc4c1f]CpuSchedDispatch@vmkernel#nover+0xabe stack: 0x0
    0x41229eb9bf80:[0x41801cdc5f4f]CpuSchedWait@vmkernel#nover+0x242 stack: 0x412200000000
    0x41229eb9bfa0:[0x41801cdc659e]CpuSched_Wait@vmkernel#nover+0x1d stack: 0x41229eb9bff0
    0x41229eb9bff0:[0x41801ccb1a3a]VmAssistantProcessTask@vmkernel#nover+0x445 stack: 0x0
    0x41229eb9bff8:[0x0]<unknown> stack: 0x0

    This issue is resolved in this release.

  • IP address displayed on DCUI changes on reset when management traffic is enabled on multiple VMkernel ports
    Whenever you reset a management network where the management traffic is enabled on multiple VMkernel ports, the IP address displayed on the Direct Console User Interface (DCUI) changes.

    This issue is resolved in this release.
  • ESXi host displays a purple diagnostic screen with exception 14 error
    ESXi hosts with DvFilter module might display a purple diagnostic screen with a backtrace similar to the following:

    2013-07-18T06:41:39.699Z cpu12:10669)0x412266b5bbe8:[0x41800d50b532]DVFilterDispatchMessage@com.vmware.vmkapi#v2_1_0_0+0x92d stack: 0x10
    2013-07-18T06:41:39.700Z cpu12:10669)0x412266b5bc68:[0x41800d505521]DVFilterCommBHDispatch@com.vmware.vmkapi#v2_1_0_0+0x394 stack: 0x100
    2013-07-18T06:41:39.700Z cpu12:10669)0x412266b5bce8:[0x41800cc2083a]BH_Check@vmkernel#nover+0x185 stack: 0x412266b5bde8, 0x412266b5bd88,

    This issue is resolved in this release.

  • ESXi hosts might fail with a purple screen due to race conditions in ESXi TCP/IP stack
    ESXi hosts might fail with a purple screen and display error messages similar to the following:

    2013-02-22T15:33:14.296Z cpu8:4104)@BlueScreen: #PF Exception 14 in world 4104:idle8 IP 0x4180083e796b addr 0x1
    2013-02-22T15:33:14.296Z cpu8:4104)Code start: 0x418007c00000 VMK uptime: 58:11:48:48.394
    2013-02-22T15:33:14.298Z cpu8:4104)0x412200207778:[0x4180083e796b]ether_output@<None>#<None>+0x4e stack: 0x41000d44f360
    2013-02-22T15:33:14.299Z cpu8:4104)0x4122002078b8:[0x4180083f759d]arpintr@<None>#<None>+0xa9c stack: 0x4100241a4e00

    This issue occurs due to race conditions in ESXi TCP/IP stack.

    This issue is resolved in this release.

  • Intel e1000e network interface driver might stop responding on received (RX) traffic
    Intel e1000e network interface driver might stop responding on received (RX) traffic.

    This issue is resolved in this release.

  • ESXi hosts might fail with a purple screen when Network Healthcheck feature is enabled
    When the Network Healthcheck feature is enabled and it handles many Healthcheck packets, the L2Echo function might not be able to handle high network traffic and the ESXi hosts might fail with a purple diagnostic screen similar to the following:

    2013-06-27T10:19:16.074Z cpu4:8196)@BlueScreen: PCPU 1: no heartbeat (2/2 IPIs received)
    2013-06-27T10:19:16.074Z cpu4:8196)Code start: 0x418024600000 VMK uptime: 44:20:54:02.516
    2013-06-27T10:19:16.075Z cpu4:8196)Saved backtrace from: pcpu 1 Heartbeat NMI
    2013-06-27T10:19:16.076Z cpu4:8196)0x41220781b480:[0x41802468ded2]SP_WaitLockIRQ@vmkernel#nover+0x199 stack: 0x3b
    2013-06-27T10:19:16.077Z cpu4:8196)0x41220781b4a0:[0x4180247f0253]Sched_TreeLockMemAdmit@vmkernel#nover+0x5e stack: 0x20
    2013-06-27T10:19:16.079Z cpu4:8196)0x41220781b4c0:[0x4180247d0100]MemSched_ConsumeManagedKernelMemory@vmkernel#nover+0x1b stack: 0x0
    2013-06-27T10:19:16.080Z cpu4:8196)0x41220781b500:[0x418024806ac5]SchedKmem_Alloc@vmkernel#nover+0x40 stack: 0x41220781b690...
    2013-06-27T10:19:16.102Z cpu4:8196)0x41220781bbb0:[0x4180247a0b13]vmk_PortOutput@vmkernel#nover+0x4a stack: 0x100
    2013-06-27T10:19:16.104Z cpu4:8196)0x41220781bc20:[0x418024c65fb2]L2EchoSendPkt@com.vmware.net.healthchk#1.0.0.0+0x85 stack: 0x4100000
    2013-06-27T10:19:16.105Z cpu4:8196)0x41220781bcf0:[0x418024c6648e]L2EchoSendPort@com.vmware.net.healthchk#1.0.0.0+0x4b1 stack: 0x0
    2013-06-27T10:19:16.107Z cpu4:8196)0x41220781bfa0:[0x418024c685d9]L2EchoRxWorldFn@com.vmware.net.healthchk#1.0.0.0+0x7f8 stack: 0x4122
    2013-06-27T10:19:16.108Z cpu4:8196)0x41220781bff0:[0x4180246b6c8f]vmkWorldFunc@vmkernel#nover+0x52 stack: 0x0


    This issue is resolved in this release.
  • Unused vSphere Distributed Switch (VDS) ports are not cleared from the .dvsData directory on the datastores
    During vMotion of a virtual machine that has a vNIC connected to VDS, port files from the vMotion source host are not cleared from .dvsData directory even after a while.

    This issue is resolved in this release.

  • The return value of net.throughput.usage in vCenter performance chart and VMkernel are contradictory
    In the vCenter performance chart, the net.throughput.usage related value is in kilobytes, but the same value is returned in bytes in the VMkernel. This leads to incorrect representation of values in the vCenter performance chart.

    This issue is resolved in this release.

  • TCP Segmentation Offload (TSO) capability of tg3 NICs might cause ESXi hosts to fail
    When the TSO capability is enabled in tg3 driver, the tg3 NICs might corrupt the data going through them.

    This issue is resolved in this release. The TSO capability is disabled for tg3 NICs.
  • Network packet drops are incorrectly reported in esxtop between two virtual machines on the same ESXi host and vSwitch
    When two virtual machines are configured with the e1000 driver on the same vSwitch on a host, esxtop might report significant packet drops for the network traffic between the two virtual machines. This happens because split packets are not accounted for during reporting when TSO is enabled in the guest.

    This issue is resolved in this release. The VMXNET3 network driver is updated in this release.

Security Issues

  • Update to libxslt
    The ESXi userworld libxslt package is updated.
  • Update to NTP daemon
    The NTP daemon is updated to resolve a security issue.
    The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2013-5211 to this issue.
    Note: A workaround for this issue is documented in KB 2070193.
  • Update to glibc packages
    The ESXi glibc-2.5 package is updated to resolve a security issue.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2013-4332 to this issue.

Server Configuration Issues

  • Inconsistent CPU utilization value is reported by the esxtop command-line tool
    On machines with Hyper-Threading enabled, the CPU core utilization value shown in the vSphere Client is double the core utilization value reported by esxtop.

    This issue is resolved in this release.
  • Incorrect unit reported by the esxcli network nic coalesce get command
    The unit shown in the output of the esxcli network nic coalesce get command is milliseconds. The correct unit is microseconds.

    This issue is resolved in this release.
  • Unable to access the VMFS datastore or some files
    You might find the Virtual Machine File System datastore missing from the Datastore tab of vCenter Server or an event similar to the following is displayed in the Events tab:

    XXX esx.problem.vmfs.lock.corruptondisk.v2 XXX or At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too.

    The following message is logged in the vmkernel.log file:

    [lockAddr 36149248] Invalid state: Owner 00000000-00000000-0000-000000000000 mode 0 numHolders 0 gblNumHolders 4294967295ESC[7m2013-05-12T19:49:11.617Z cpu16:372715)WARNING: DLX: 908: Volume 4e15b3f1-d166c8f8-9bbd-14feb5c781cf ("XXXXXXXXX") might be damaged on the disk. Corrupt lock detected at offset 2279800: [type 10c00001 offset 36149248 v 6231, hb offset 372ESC[0$

    You might also see the following message logged in the vmkernel.log file:

    2013-07-24T05:00:43.171Z cpu13:11547)WARNING: Vol3: ValidateFS:2267: XXXXXX/51c27b20-4974c2d8-edad-b8ac6f8710c7: Non-zero generation of VMFS3 volume: 1

    This issue is resolved in this release.

  • ESXi local disks are unavailable after you install EMC PowerPath on HP server
    After installing EMC PowerPath, local datastores are not claimed by any Multipathing Plugin (MPP). When the claim rules are run, if a path matches a claim rule, that path is offered to the MPP. In vSphere 5.1, if the MPP returns a failure, that path is matched against other claim rules, and in case of a match, the path is offered to the MPP in those claim rules. If a path is not claimed by any MPP, it is offered to NMP due to the catch-all claim rule.

    This issue is resolved in this release.

  • ESXi host displays a purple diagnostic screen with error
    During a VMkernel System Information (VSI) call from the userworld program ps, the VMkernel experiences an error when the instance list sent to the kernel is corrupted. The issue occurs if numParams is too high and the kernel tries to access the array at that index. A purple diagnostic screen with a backtrace similar to the following is displayed:

    2013-09-06T00:35:49.995Z cpu11:148536)@BlueScreen: #PF Exception 14 in world 148536:ps IP 0x418004b31a34 addr 0x410019463308

    PTEs:0x4064ffe023;0x2080001023;0x100065023;0x0;

    2013-09-06T00:35:49.996Z cpu11:148536)Code start: 0x418004800000 VMK uptime: 2:11:35:53.448
    2013-09-06T00:35:49.996Z cpu11:148536)0x412250e1bd80:[0x418004b31a34]VSIVerifyInstanceArgs@vmkernel#nover+0xf3 stack: 0x410009979570
    2013-09-06T00:35:49.997Z cpu11:148536)0x412250e1bdd0:[0x418004b31bbb]VSI_CheckValidLeaf@vmkernel#nover+0xca stack: 0x8c8
    2013-09-06T00:35:49.998Z cpu11:148536)0x412250e1be40:[0x418004b32222]VSI_GetInfo@vmkernel#nover+0xb1 stack: 0x4100194612c0
    2013-09-06T00:35:49.999Z cpu11:148536)0x412250e1beb0:[0x418004ca7f31]UWVMKSyscallUnpackVSI_Get@<None>#<None>+0x244 stack: 0x412250e27000
    2013-09-06T00:35:50.000Z cpu11:148536)0x412250e1bef0:[0x418004c79348]User_UWVMKSyscallHandler@<None>#<None>+0xa3 stack: 0x0
    2013-09-06T00:35:50.001Z cpu11:148536)0x412250e1bf10:[0x4180048a8672]User_UWVMKSyscallHandler@vmkernel#nover+0x19 stack: 0xffea48d8
    2013-09-06T00:35:50.001Z cpu11:148536)0x412250e1bf20:[0x418004910064]gate_entry@vmkernel#nover+0x63 stack: 0x0
    2013-09-06T00:35:50.004Z cpu11:148536)base fs=0x0 gs=0x418042c00000 Kgs=0x0

    This issue is resolved in this release.

  • ESXi host becomes unresponsive and disconnects from vCenter Server
    The ESXi host becomes unresponsive and disconnects from vCenter Server. Also, DCUI and SSH logins to the host do not work because of a memory leak in lsassd caused by offline domains in the Active Directory environment.

    This issue is resolved in this release.
  • Allow virtual machines to display the physical host serial number
    Virtual machines cannot reflect the serial numbers of the physical ESXi hosts.

    This issue is resolved in this release.
  • ESXi host displays incorrect values for the resourceCpuAllocMax system counter
    When you attempt to retrieve the value for the resourceCpuAllocMax and resourceMemAllocMax system counters against the host system, the ESX host returns incorrect values. This issue is observed when vSphere Client is connected to vCenter Server.

    This issue is resolved in this release.

  • Fibre Channel Host Bus Adapter (HBA) speed is displayed incorrectly
    Fibre Channel Host Bus Adapter (HBA) speed is always displayed incorrectly as zero for some ESXi hosts on Managed Object Browser (MOB).

    This issue is resolved in this release.

  • Multiple ESXi hosts might stop responding in vCenter Server
    When a large number of parallel HTTP GET /folder URL requests are sent to hostd, the hostd service fails. This prevents the host from being added back to vCenter Server. An error message similar to the following might be displayed:
     
    Unable to access the specified host, either it doesn't exist, the server software is not responding, or there is a network problem.
     
    This issue is resolved in this release.
  • Virtual machines with hardware version earlier than 10 cannot connect to NVS network
    Virtual machines with hardware version earlier than 10 cannot connect to an NVS (NSX Virtual Switch) network. However, virtual machines with hardware version 4 or later can connect to the NVS network.

    This issue is resolved in this release.
  • Hostd fails and generates hostd-worker dump
    The ESXi 5.5 host might generate a hostd-worker dump after you detach a software iSCSI disk that is connected through a vmknic on a vDS. This issue occurs when you attempt to retrieve the latest information about the host.

    This issue is resolved in this release.

  • Hot removal of a shared nonpersistent disk takes longer when the disk is attached to (or shared with) another powered-on VM
    When you add shared nonpersistent read-only disks to a virtual machine, the disk removal operation might take longer to complete.

    This issue is resolved in this release.
  • A provisioned and customized virtual machine might lose its network connection
    When you provision and customize a virtual machine from a template on a vDS with ephemeral ports, the virtual machine might lose its connection to the network.

    The error messages similar to the following might be written to the log files:

    2013-08-05T06:33:33.990Z| vcpu-1| VMXNET3 user: Ethernet1 Driver Info: version = 16847360 gosBits = 2 gosType = 1, gosVer = 0, gosMisc = 0
    2013-08-05T06:33:35.679Z| vmx| Msg_Post: Error
    2013-08-05T06:33:35.679Z| vmx| [msg.mac.cantGetPortID] Unable to get dvs.portId for ethernet0
    2013-08-05T06:33:35.679Z| vmx| [msg.mac.cantGetNetworkName] Unable to get networkName or devName for ethernet0
    2013-08-05T06:33:35.679Z| vmx| [msg.device.badconnect] Failed to connect virtual device Ethernet0.
    2013-08-05T06:33:35.679Z| vmx|

    This issue is resolved in this release.

  • Trap files are created in the ESXi host when the SNMP agent is stopped
    Simple Network Management Protocol (SNMP) creates trap files (.trp) in the /var/spool/snmp folder of the ESXi host when the SNMP agent is stopped. If the /var/spool/snmp directory is not successfully deleted when the SNMP agent is stopped, hostd continues to write trap files (.trp) into /var/spool/snmp, causing the host to run out of inodes and appear as disconnected in vCenter Server. As a result, you might not be able to perform any task on the host.

    This issue is resolved in this release.

  • GuestNicInfo is missing for a powered-on virtual machine on a Preboot Execution Environment (PXE) booted host
    GuestNicInfo and GuestStackInfo are not available for a powered-on virtual machine even when VMware Tools is installed and running.

    This issue is resolved in this release.

  • Executing ESXCLI commands or using monitoring tools that rely on SNMP agent might result in connection loss to ESXi host
    When you execute ESXCLI commands or if you use monitoring tools that rely on data from the SNMP agent in ESXi, the connection to the ESXi host might be lost because the hostd service fails.

    This issue is resolved in this release.

  • The bandwidthCap and throughputCap options of an ESXi host might not work on a guest operating system
    On an ESXi host, when bandwidthCap and throughputCap are set at the same time, the I/O throttling option might not work on virtual machines. This happens because of an incorrect logical comparison when the throttle option is set in the SCSI scheduler.

    This issue is resolved in this release.

  • SNMP traps are not received on hosts where SNMP is enabled and third-party CIM providers are installed on the server
    When the monitored hardware status changes on an ESXi host on which SNMP is enabled and third-party CIM providers are installed, you might not receive SNMP traps. Messages similar to the following are logged in the syslog file:

    2013-07-11T05:24:39Z snmpd: to_sr_type: unable to convert varbind type '71'
    2013-07-11T05:24:39Z snmpd: convert_value: unknown SR type value
    02013-07-11T05:24:39Z snmpd: parse_varbind: invalid varbind with type 0 and value: '2'
    2013-07-11T05:24:39Z snmpd: forward_notifications: parse file '/var/spool/snmp/1373520279_6_1_3582.trp' failed, ignored

    This issue is resolved in this release.

  • Hostd fails while attempting to reset the CPUID mask on a virtual machine
    While attempting to reset the CPUID mask on a virtual machine, hostd crashes when the value of SetRegisterInfo is NULL or an empty string.

    This issue is resolved in this release.

  • Host Profile fails to apply the Network Attached Storage (NAS) profile intermittently
    The host intermittently fails to apply NasStorageProfile and is unable to exit maintenance mode when the profile application fails.

    This issue is resolved in this release.

  • Attempts to apply a complex host profile might result in a timeout
    When you apply a complex host profile, for example, one that contains a large number of port groups and datastores, the operation might time out with an error message similar to the following:

    2013-04-09T15:27:38.562Z [4048CB90 info 'Default' opID=12DA4C3C-0000057F-ee] [VpxLRO] -- ERROR task-302 -- -- vim.profile.host.profileEngine.HostProfileManager.applyHostConfig:

    vmodl.fault.SystemError:
    --> Result:
    --> (vmodl.fault.SystemError) {
    --> dynamicType = ,
    --> faultCause = (vmodl.MethodFault) null,
    --> reason = "",
    --> msg = "A general system error occurred: ",
    --> }

    The hostd default timeout is 10 minutes. Because applyHostConfig is not a progressive task, the hostd service is unable to distinguish between a failed task and a long-running task during the hostd timeout. As a result, the hostd service reports that applyHostConfig has failed.

    This issue is resolved in this release by setting a 30-minute timeout as part of the HostProfileManager managed object. However, this issue might still occur when you attempt to apply a large host profile and the task exceeds the 30-minute timeout limit. To work around this issue, re-apply the host profile.

    Note: The actual trigger for timeout depends on the complexity of the host profile.
  • VMkernel fails when a Virtual Machine Monitor returns an invalid Machine Page Number
    When VMX passes a VPN value to read a page, the VMkernel fails to find a valid machine page number for that VPN value, which results in the host failing with a purple diagnostic screen.

    This issue is resolved in this release.
  • Adding a new virtual adapter to a vDS might cause the syslog.log file to log an error message
    On an ESXi host configured to use a vSphere Distributed Switch (vDS), if you add a new virtual adapter to the vDS, the syslog.log file might log an error message similar to the following:

    lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs failed Not found

    This error message can be safely ignored. It appears if the SNMP log level is info (the default).
    To ensure that this message does not appear in the syslog file, set the SNMP log level to warning or error by running one of the following commands:

    localcli system snmp set --loglevel=warning
    localcli system snmp set --loglevel=error

    This issue is resolved in this release.

Storage Issues

  • Output of esxtop performance data might be displayed as zero
    When the output of esxtop performance data is redirected to a CSV formatted file, the esxtop.csv values collected in batch mode might change to zero. The esxtop.csv file might display I/O values similar to the following:

    "09/04/2013 22:00:00","1251.43","4.89","7.12","1839.62","7.19","4.99","1273.05","4.97","7.08""09/04/2013
    22:00:10","1283.92","5.02","7.06","1875.14","7.32","4.89","1290.37","5.04","7.07""09/04/2013
    22:00:20","1286.49","5.03","7.03","1914.86","7.48","4.87","1320.55","5.16","6.90""09/04/2013
    22:00:31","1222.56","4.78","7.44","1775.23","6.93","5.21","1253.87","4.90","7.28""09/04/2013
    22:00:41","1269.87","4.96","7.15","1847.40","7.22","4.97","1267.62","4.95","7.13""09/04/2013
    22:00:51","1291.36","5.04","7.05","1857.97","7.26","4.96","1289.40","5.04","7.08""09/04/2013
    22:01:01","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00""09/04/2013
    22:01:11","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00""09/04/2013
    22:01:22","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00""09/04/2013
    22:01:32","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00""09/04/2013 22:01:42","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00"


    This issue occurs when vSCSI invalid messages are set to True by default instead of False.

    This issue is resolved in this release.

Virtual SAN Issues

  • You cannot hot unplug storage disks claimed by the Advance Host Controller Interface (AHCI) driver when the disks are part of a Virtual SAN disk group
    The AHCI driver claims data or SSD storage disks that are connected to SATA ports on the motherboard or through AHCI-enabled storage adapters. If the disks are claimed by the AHCI driver and are part of a Virtual SAN disk group, you cannot hot unplug them from the ESXi host. If you do so, the host will not be able to detect the disks after you reconnect them.

    This issue is resolved in this release.
  • The ESXi host might fail on SSD hot unplug
    When you attempt to hot unplug an SSD, the ESXi host might fail.

    This issue is resolved in this release.
  • Virtual SAN might not detect the disk that you hot unplug and hot reinsert
    If you hot unplug an SSD or non-SSD disk, Virtual SAN might not be able to detect it after a hot reinsert.

    This issue is resolved in this release.
  • ESXi host forms a single-node cluster and is unable to join an existing Virtual SAN cluster
    This occurs if the vmknic configured for Virtual SAN does not have an IP address when Virtual SAN is enabled. If hostd has not started by the time the Virtual SAN-enabled vmknic gets an IP address, the host might form a single-node cluster.

    This issue is resolved in this release.
  • A node fails to enter maintenance mode with the Full data evacuation option if the current capacity usage of the Virtual SAN cluster is more than 70 percent
    This occurs when removing a node from the cluster would cause the capacity usage of the Virtual SAN cluster to exceed 70 percent. The cluster capacity refers to the space in the Virtual SAN cluster.

    This issue is resolved in this release.
  • Cloning a virtual machine with a storage policy always transfers the storage policy to the target virtual machine, even when you do not need it.
    When you clone a virtual machine associated with a storage policy and select not to assign the storage policy to the target virtual machine, the cloning process succeeds. However, the storage policy of the original virtual machine will be used for the configuration of the target virtual machine. As a result, the target virtual machine might unintentionally use extra resources on the Virtual SAN datastore.

    This issue is resolved in this release.

vCenter Server and vSphere Web Client Issues

  • Performance statistics calculation for virtual disk throughput might be incorrect by a factor of 1024
    The performance statistics for virtual disk throughput are incorrectly measured in bytes per second.

    This issue is resolved in this release by changing the measurement unit to kilobytes per second.

Virtual Machine Management Issues

  • Virtual machine fails to boot from CD or DVD when Legacy Floppy Disk is disabled in the BIOS
    Virtual machine might fail to boot if you disable Legacy Floppy Disk in the BIOS. This issue occurs when you follow the steps below:
    1. Create a virtual machine without an operating system.
    2. Use the Virtual Machine Settings to configure the CD or DVD to connect to an ISO image file or a physical drive to install the guest operating system.
    3. Disable Legacy Floppy Disk in the BIOS.
    After you disable Legacy Floppy Disk in the BIOS, the virtual machine beeps twice and fails to boot.

    This issue is resolved in this release.
  • Attempts to export a virtual machine as OVF fail with a timeout error
    When you attempt to export, in Open Virtualization Format (OVF), a virtual machine that uses an ext2 or ext3 file system and has a large sequence of empty blocks on the disk (for example, an empty secondary disk of 210 GB or more), the operation times out.

    This issue is resolved in this release.

VMware HA and Fault Tolerance Issues

  • Attempts to configure HA might fail with error message stating that the HA master agent cannot be found
    You might be unable to configure High Availability (HA). Error messages similar to the following might be written to the vpxd.log file:

    vpxd-1967.log:2013-04-06T01:35:02.156+09:00 [07220 warning 'DAS'] [FdmManager::ReportMasterChange] VC couldn't find a master for cluster HA_GD for 120 seconds
    vpxd-1968.log:2013-04-06T05:02:55.991+09:00 [07320 warning 'DAS'] [FdmManager::ReportMasterChange] VC couldn't find a master for cluster HA_ID for 120 seconds


    This issue occurs when hostd runs out of sessions. Error messages similar to the following are written to the hostd.log file:

    SOAP session count limit reached.

    This issue is resolved in this release.

Upgrade and Installation Issues

  • ESXi installation fails with UnboundLocalError error
    ESXi installation fails with the following error: UnboundLocalError: local variable 'isLocalOnlyAdapter' referenced before assignment

    This issue is resolved in this release.

VMware Tools Issues

  • Attempts to unregister the VSS driver might cause vCenter Protect Agent to display a warning message during VMware Tools update
    When you attempt to update VMware Tools on a Windows guest operating system with a VMware Tools build that has an unsigned comreg.exe file and unregister the VSS driver, the vCenter Protect Agent might display a warning message similar to the following:

    comreg.exe was trying to run

    This issue is resolved in this release by including a VMware-signed comreg.exe file.
  • Microsoft Office fails to save files to a shared directory
    When you are saving files from a Microsoft Office 2007 or Microsoft Office 2010 application to a shared directory on a virtual machine protected by VMware vShield Endpoint and an agentless antivirus solution, you see errors similar to the following:

    File is currently in use. Try again later.
    Cannot save the file to this location.

    Files saved to the share are empty and 0KB in size.

    This issue is resolved in this release.

  • Errors while updating VMware Tools on RHEL 6.2
    An error similar to the following might be displayed while updating an earlier version of VMware Tools to version 9.0.5 in a RHEL 6.2 (Red Hat Enterprise Linux) virtual machine using RPM (Red Hat Package Manager):

    Error: Package: kmod-vmware-tools-vsock-9.3.3.0-2.6.32.71.el6.x86_64.3.x86_64 (RHEL6-isv)
    Requires: vmware-tools-vsock-common = 9.0.1
    Installed: vmware-tools-vsock-common-9.0.5-1.el6.x86_64 (@/vmware-tools-vsock-common-9.0.5-1.el6.x86_64)
    vmware-tools-vsock-common = 9.0.5-1.el6
    Available: vmware-tools-vsock-common-8.6.10-1.el6.x86_64 (RHEL6-isv)
    vmware-tools-vsock-common = 8.6.10-1.el6
    Available: vmware-tools-vsock-common-9.0.1-3.x86_64 (RHEL6-isv)
    vmware-tools-vsock-common = 9.0.1-3

    This issue is resolved in this release.

  • The VMware Tools service fails when retrieving virtual machine information with the --cmd argument
    When you run a vmtoolsd query such as vmtoolsd --cmd "info-get guestinfo.ovfEnv", the VMware Tools service might fail. This issue is known to occur on VMware Tools versions 9.0.1 and 9.0.5.

    This issue is resolved in this release.
  • Prebuilt kernel modules are missing for Oracle Linux 5.x
    VMware Tools is updated to provide Oracle Linux 5.x with 2.6.39-200/400 prebuilt kernel modules.

    This issue is resolved in this release.
  • When you install VMware Tools using Operating System Specific Packages, the /tmp/vmware-root directory fills up with vmware-db.pl.* files
    When you install VMware Tools using Operating System Specific Packages (OSPs), you can see an increase in the number of log files present in the /tmp/vmware-root directory. This issue is observed on SUSE Linux Enterprise Server 11 Service Pack 2 and Red Hat Enterprise Linux 6 virtual machines.

    This issue is resolved in this release.
  • vSphere Web Client might fail to recognize that VMware Tools is installed for Linux guest operating systems
    vSphere Web Client might fail to recognize that VMware Tools is installed for Linux guest operating systems.
    Error message similar to the following might be displayed under Summary tab in vSphere Web Client:
    VMware Tools is not installed on this virtual machine.

    This issue is resolved in this release.
  • Maxon Cinema 4D application might fail in vSGA mode
    Maxon Cinema 4D might fail in Virtual Shared Graphics Acceleration (vSGA) mode when you attempt to click any UI item of the application due to an issue in the VMware OpenGL driver.

    This issue is resolved in this release.
  • Rendering-related errors might be seen when you run the Petrel 3D application in a virtual machine
    You might observe rendering-related errors when you run the Petrel 3D application in a virtual machine due to an issue with the OpenGL graphics driver.

    This issue is resolved in this release.
  • User is forcefully logged out of Windows 8 and Windows 8.1 virtual machines during VMware Tools upgrade
    While upgrading the VMware Tools in Windows 8 and Windows 8.1 virtual machines, the user is automatically logged out of the virtual machines.

    This issue is resolved in this release.

Known Issues

Known issues not previously documented are marked with the * symbol. The known issues are grouped as follows:

Installation Issues
  • Simple Install fails on Windows Server 2012
    Simple Install fails on Windows Server 2012 if the operating system is configured to use a DHCP IP address.

    Workaround: Configure the Windows Server 2012 system to use a static IP address.

  • Installation on Software iSCSI LUN fails with the error Expecting 2 bootbanks, found 0
    The full error that accompanies this problem is:
    Error (see log for more info):
    Expecting 2 bootbanks, found 0.

    This problem occurs when the first network adapter is configured to boot from an iBFT iSCSI. The iBFT IP settings are used to configure the VMkernel network port that is created to access the iSCSI boot disk. In this case, the port is the management network port, because the first adapter is used for management traffic.

    When the installation is approximately 90 percent complete, the installer reconfigures the management interface with DHCP. As a result, the iBFT IP settings are lost and the TCP connection with the iSCSI boot target breaks.

    Workaround: Take one of the following actions:

    • If multiple network adapters are available, use the second network adapter to access the iSCSI boot disk.
    • If only one network adapter is available, configure iBFT to use DHCP. The iSCSI target should be on the management network. If the iSCSI target is on a different subnet, the default VMkernel gateway can route both management and iSCSI traffic.

  • If you use the preserve VMFS option with Auto Deploy Stateless Caching or Auto Deploy Stateful Installs, no core dump partition is created
    When you use Auto Deploy for Stateless Caching or Stateful Install on a blank disk, an MSDOS partition table is created. However, no core dump partition is created.

    Workaround: When you enable the Stateless Caching or Stateful Install host profile option, select Overwrite VMFS, even when you install on a blank disk. When you do so, a 2.5GB coredump partition is created.

  • During scripted installation, ESXi is installed on an SSD even though the --ignoressd option is used with the installorupgrade command
    In ESXi 5.5, the --ignoressd option is not supported with the installorupgrade command. If you use the --ignoressd option with the installorupgrade command, the installer displays a warning that this is an invalid combination. The installer continues to install ESXi on the SSD instead of stopping the installation and displaying an error message.

    Workaround: To use the --ignoressd option in a scripted installation of ESXi, use the install command instead of the installorupgrade command.

  • Delay in Auto Deploy cache purging might apply a host profile that has been deleted
    After you delete a host profile, it is not immediately purged from the Auto Deploy cache. As long as the host profile is persisted in the cache, Auto Deploy continues to apply the host profile. Any rules that apply the profile fail only after the profile is purged from the cache.

    Workaround: You can determine whether any rules use deleted host profiles by using the Get-DeployRuleSet PowerCLI cmdlet. The cmdlet shows the string deleted in the rule's itemlist. You can then run the Remove-DeployRule cmdlet to remove the rule.

  • Applying host profile that is set up to use Auto Deploy with stateless caching fails if ESX is installed on the selected disk
    You use host profiles to set up Auto Deploy with stateless caching enabled. In the host profile, you select a disk on which a version of ESX (not ESXi) is installed. When you apply the host profile, an error that includes the following text appears.
    Expecting 2 bootbanks, found 0

    Workaround: Select a different disk to use for stateless caching, or remove the ESX software from the disk. If you remove the ESX software, it becomes unavailable.

  • Installing or booting ESXi version 5.5.0 fails on servers from Oracle America (Sun) vendors
    When you perform a fresh ESXi version 5.5.0 installation or boot an existing ESXi version 5.5.0 installation on servers from Oracle America (Sun) vendors, the server console displays a blank screen during the installation process or when the existing ESXi 5.5.0 build boots. This happens because servers from Oracle America (Sun) vendors have a HEADLESS flag set in the ACPI FADT table, even though they are not headless platforms.

    Workaround: When you install or boot ESXi 5.5.0, pass the boot option ignoreHeadless="TRUE".
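
    For example, press Shift+O at the ESXi boot prompt to edit the boot options and append ignoreHeadless="TRUE". After the host has booted, the setting can be made persistent from the ESXi Shell; the following is a minimal sketch, assuming the system settings kernel namespace is available on your build:

    esxcli system settings kernel set --setting=ignoreHeadless --value=TRUE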

Upgrade Issues

  • Patch name for Patch ID ESXi550-Update01 is displayed as see description*
    When using vSphere Update Manager, you might notice that the patch name for the ESXi 5.5 Update 1 roll-up bundle, Patch ID ESXi550-Update01, is displayed as see description, (No KB).
    There is no functional impact due to the missing patch name. The roll-up bundle details can be found in KB 2065832.

    Workaround: None
  • If you use ESXCLI commands to upgrade an ESXi host with less than 4GB physical RAM, the upgrade succeeds, but some ESXi operations fail upon reboot
    ESXi 5.5 requires a minimum of 4GB of physical RAM. The ESXCLI command-line interface does not perform a pre-upgrade check for the required 4GB of memory. You successfully upgrade a host with insufficient memory with ESXCLI, but when you boot the upgraded ESXi 5.5 host with less than 4GB RAM, some operations might fail.

    Workaround: None. Verify that the ESXi host has more than 4GB of physical RAM before the upgrade to version 5.5.

  • After upgrade from vCenter Server Appliance 5.0.x to 5.5, vCenter Server fails to start if an external vCenter Single Sign-On is used
    If the user chooses to use an external vCenter Single Sign-On instance while upgrading the vCenter Server Appliance from 5.0.x to 5.5, the vCenter Server fails to start after the upgrade. In the appliance management interface, the vCenter Single Sign-On is listed as not configured.

    Workaround: Perform the following steps:

    1. In a Web browser, open the vCenter Server Appliance management interface (https://appliance-address:5480).
    2. On the vCenter Server/Summary page, click the Stop Server button.
    3. On the vCenter Server/SSO page, complete the form with the appropriate settings, and click Save Settings.
    4. Return to the Summary page and click Start Server.

  • When you use ESXCLI to upgrade an ESXi 4.x or 5.0.x host to version 5.1 or 5.5, the vMotion and Fault Tolerance Logging (FT Logging) settings of any VMKernel port group are lost after the upgrade
    If you use the command esxcli software profile update <options> to upgrade an ESXi 4.x or 5.0.x host to version 5.1 or 5.5, the upgrade succeeds, but the vMotion and FT Logging settings of any VMkernel port group are lost. As a result, vMotion and FT Logging are restored to the default setting (disabled).

    Workaround: Perform an interactive or scripted upgrade, or use vSphere Update Manager to upgrade hosts. If you use the esxcli command, apply vMotion and FT Logging settings manually to the affected VMkernel port group after the upgrade (see the command sketch after this list).

  • When you upgrade vSphere 5.0.x or earlier to version 5.5, system resource allocation values that were set manually are reset to the default value
    In vSphere 5.0.x and earlier, you modify settings in the system resource allocation user interface as a temporary workaround. You cannot reset the value for these settings to the default without completely reinstalling ESXi. In vSphere 5.1 and later, the system behavior changes, so that preserving custom system resource allocation settings might result in values that are not safe to use. The upgrade resets all such values.

    Workaround: None.

  • IPv6 settings of virtual NIC vmk0 are not retained after upgrade from ESX 4.x to ESXi 5.5
    When you upgrade an ESX 4.x host with IPv6 enabled to ESXi 5.5 by using the --forcemigrate option, the IPv6 address of virtual NIC vmk0 is not retained after the upgrade.

    Workaround: None.
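
    As a sketch of the manual step referenced in the esxcli upgrade workaround above, vMotion can be re-enabled on a VMkernel interface from the ESXi Shell with vim-cmd; the interface name vmk1 is only an example, and FT Logging is typically re-enabled through the VMkernel adapter settings in the vSphere Web Client:

    vim-cmd hostsvc/vmotion/vnic_set vmk1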

vCenter Single Sign-On Issues
  • Error 29107 appears during vSphere Web Client upgrade from 5.1 Update 1a to 5.5
    During an upgrade of a vSphere Web Client from version 5.1 Update 1a to version 5.5, Error 29107 appears if the vCenter Single Sign-On service that was in use before the upgrade is configured as High Availability Single Sign-On.

    Workaround: Perform the upgrade again. You can run the installer and select Custom Install to upgrade only the vSphere Web Client.

  • Cannot change the password of administrator@vsphere.local from the vSphere Web Client pulldown menu
    When you log in to the vCenter Single Sign-On server from the vSphere Web Client, you can perform a password change from the pulldown menu. When you log in as administrator@vsphere.local, the Change Password option is greyed out.

    Workaround:

    1. Select the Manage tab, and select vCenter Single Sign-On > Users and Groups.
    2. Right-click the administrator user and click Edit User.
    3. Change the password.

Networking Issues

  • Static routes associated with vmknic interfaces and dynamic IP addresses might fail to appear after reboot*
    After you reboot the host, static routes that are associated with a VMkernel network interface (vmknic) and a dynamic IP address might fail to appear.
    This issue occurs because of a race condition between the DHCP client and the restore routes command. The DHCP client might not finish acquiring an IP address for the vmknics when the host attempts to restore custom routes during the reboot process. As a result, the gateway might not be set up and the routes are not restored.

    Workaround: Run the esxcfg-route -r command to restore the routes manually.
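
    To confirm that the routes were restored, you can list the VMkernel routing table, for example:

      esxcfg-route -l                       # legacy view of the VMkernel routing table
      esxcli network ip route ipv4 list     # equivalent esxcli view
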
  • An ESXi host stops responding after being added to vCenter Server by its IPv6 address
    When you add an ESXi host to vCenter Server by an IPv6 link-local address of the form fe80::/64, within a short time the host name becomes dimmed and the host stops responding to vCenter Server.

    Workaround: Use a valid IPv6 address that is not a link-local address.

  • The vSphere Web Client lets you configure more virtual functions than are supported by the physical NIC and does not display an error message
    In the SR-IOV settings of a physical adapter, you can configure more virtual functions than are supported by the adapter. For example, you can configure 100 virtual functions on a NIC that supports only 23, and no error message appears. A message prompts you to reboot the host so that the SR-IOV settings are applied. After the host reboots, the NIC is configured with as many virtual functions as the adapter supports, or 23 in this example. The message that prompts you to reboot the host persists when it should not appear.

    Workaround: None

  • On an SR-IOV enabled ESXi host, virtual machines associated with virtual functions might not start
    When SR-IOV is enabled on an ESXi 5.1 or later host with Intel ixgbe NICs and several virtual functions are enabled in the environment, some virtual machines might fail to start.
    The vmware.log file contains messages similar to the following:
    2013-02-28T07:06:31.863Z| vcpu-1| I120: Msg_Post: Error
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ PCIPassthruChangeIntrSettings: 0a:17.3 failed to register interrupt (error code 195887110)
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5122262e-ab950f8e-cd4f-b8ac6f917d68/VMLibRoot/VMLib-RHEL6.2-64-HW7-default-3-2-1361954882/vmwar
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.withoutLog] You can request support.
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.vmSupport.vmx86]
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ To collect data to submit to VMware technical support, run "vm-support".
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.response] We will respond on the basis of your support entitlement.

    Workaround: Reduce the number of virtual functions associated with the affected virtual machine before starting it.
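
    One way to reduce the number of virtual functions is to lower the max_vfs parameter of the ixgbe module and reboot the host. The values below are examples only (one entry per ixgbe port); adjust them for your environment:

      # Example: configure 8 virtual functions on each of two ixgbe ports
      esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"
      # Verify the setting, then reboot the host for it to take effect
      esxcli system module parameters list -m ixgbe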

  • On an Emulex BladeEngine 3 physical network adapter, a virtual machine network adapter backed by a virtual function cannot reach a VMkernel adapter that uses the physical function as an uplink
    Traffic does not flow between a virtual function and its physical function. For example, on a switch backed by the physical function, a virtual machine that uses a virtual function on the same port cannot contact a VMkernel adapter on the same switch. This is a known issue of the Emulex BladeEngine 3 physical adapters. For information, contact Emulex.

    Workaround: Disable the native driver for Emulex BladeEngine 3 devices on the host. For more information, see VMware KB 2044993.
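
    As a sketch only, disabling a native driver module from the ESXi Shell generally looks like the following. The module name elxnet is an assumption; confirm the exact module name in VMware KB 2044993 before you make any change, then reboot the host:

      # Module name is an assumption; verify it against KB 2044993
      esxcli system module set --enabled=false --module=elxnet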

  • The ESXi Dump Collector fails to send the ESXi core file to the remote server
    The ESXi Dump Collector fails to send the ESXi core file if the VMkernel adapter that handles the traffic of the dump collector is configured to a distributed port group that has a link aggregation group (LAG) set as the active uplink and an LACP port channel is configured on the physical switch.

    Workaround: Perform one of the following workarounds (example commands for the Dump Collector network settings follow the list):

    • Use a vSphere Standard Switch to configure the VMkernel adapter that handles the traffic for the ESXi Dump Collector with the remote server.
    • Use standalone uplinks to handle the traffic for the distributed port group where the VMkernel adapter is configured.
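
    After you implement either option, you can point the ESXi Dump Collector at the VMkernel adapter and the remote server by using esxcli. The adapter name, server address, and port below are placeholders only:

      # Example only: bind the network core dump configuration to a VMkernel adapter and remote collector
      esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.50 --server-port 6500
      esxcli system coredump network set --enable true
      esxcli system coredump network check    # verifies that the configured collector is reachable
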
  • If you change the number of ports that a vSphere Standard Switch or vSphere Distributed Switch has on a host by using the vSphere Client, the change is not saved, even after a reboot
    If you change the number of ports that a vSphere Standard Switch or vSphere Distributed Switch has on an ESXi 5.5 host by using the vSphere Client, the number of ports does not change even after you reboot the host.

    When a host that runs ESXi 5.5 is rebooted, it dynamically scales up or down the ports of virtual switches. The number of ports is based on the number of virtual machines that the host can run. You do not have to configure the number of switch ports on such hosts.

    Workaround: None in the vSphere Client.

Server Configuration Issues

  • Menu navigation problems occur when the Direct Console User Interface is accessed from a serial console*
    When the Direct Console User Interface (DCUI) is accessed from a serial console, the Up and Down arrow keys do not work while navigating the menu, and the user is forcefully logged out of the DCUI configuration screen.

    Workaround: Stop the DCUI process. The DCUI process will be restarted automatically.

  • Host profiles might incorrectly appear as compliant after ESXi hosts are upgraded to 5.5 Update 1 and the host configuration is changed*
    If an ESXi host that is compliant with a host profile is updated to ESXi 5.5 Update 1, the host configuration is then changed, and you re-check the compliance of the host with the host profile, the profile is incorrectly reported to be compliant.

    Workaround:
    • In the vSphere Client, navigate to the host profile that has the issue and run Update Profile From Reference Host.
    • In the vSphere Web Client, navigate to the host profile that has the issue, click Copy settings from host, select the host from which you want to copy the configuration settings, and click OK.
  • Host Profile remediation fails with vSphere Distributed Switch
    Remediation errors might occur when you apply a Host Profile that includes a vSphere Distributed Switch and a virtual machine with Fault Tolerance is in a powered off state on a host that uses that distributed switch.

    Workaround: Move the powered off virtual machines to another host so that the Host Profile remediation can succeed.

  • Noncompliance messages appear after using Auto Deploy for stateless caching or stateful installs to USB
    After a host profile is edited to enable stateless caching to the USB disk on the host, the host profile receives compliance errors when you attempt to remediate it. The host is rebooted and caching finishes. After checking compliance, the following compliance error is received:
    Host state does not match specification

    Workaround: No workaround is required. The message is incorrect.

  • Host profile receives firewall settings compliance errors when you apply ESX 4.0 or ESX 4.1 profile to ESXi 5.5.x host
    If you extract a host profile from an ESX 4.0 or ESX 4.1 host and attempt to apply it to an ESXi 5.5.x host, the profile remediation succeeds. However, the compliance check reports firewall settings errors that include the following:
    Ruleset LDAP not found
    Ruleset LDAPS not found
    Ruleset TSM not found
    Ruleset VCB not found
    Ruleset activeDirectorKerberos not found

    Workaround: No workaround is required. This is expected because the firewall settings for an ESX 4.0 or ESX 4.1 host are different from those for an ESXi 5.5.x host.
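
    If you want to compare the rulesets referenced by the older profile with the rulesets that actually exist on the ESXi 5.5.x host, one way is to list them from the ESXi Shell:

      esxcli network firewall ruleset list    # lists the rulesets known to the ESXi 5.5.x firewall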

  • Changing BIOS device settings for an ESXi host might result in invalid device names
    Changing a BIOS device setting on an ESXi host might result in invalid device names if the change causes a shift in the <segment:bus:device:function> values assigned to devices. For example, enabling a previously-disabled integrated NIC might shift the <segment:bus:device:function> values assigned to other PCI devices, causing ESXi to change the names assigned to these NICs. Unlike previous versions of ESXi, ESXi 5.5 attempts to preserve devices names through <segment:bus:device:function> changes if the host BIOS provides specific device location information. Due to a bug in this feature, invalid names such as vmhba1 and vmnic32 are sometimes generated.

    Workaround: Rebooting the ESXi host once or twice might clear the invalid device names and restore the original names. Do not run an ESXi host with invalid device names in production.

Storage Issues

  • Attempts to perform live storage vMotion of virtual machines with RDM disks might fail*
    Storage vMotion of virtual machines with RDM disks might fail and the virtual machines might be seen in a powered off state. Attempts to power on the virtual machine fail with the following error:

    Failed to lock the file

    Workaround: None.
  • Renamed tags appear as missing in the Edit VM Storage Policy wizard
    A virtual machine storage policy can include rules based on datastore tags. If you rename a tag, the storage policy that references this tag does not automatically update the tag and shows it as missing.

    Workaround: Remove the tag marked as missing from the virtual machine storage policy and then add the renamed tag. Reapply the storage policy to all out-of-date entities.

  • A virtual machine cannot be powered on when the Flash Read Cache block size is set to 16KB, 256KB, 512KB, or 1024KB
    A virtual machine configured with Flash Read Cache and a block size of 16KB, 256KB, 512KB, or 1024KB cannot be powered on. Flash Read Cache supports a minimum cache size of 4MB and maximum of 200GB, and a minimum block size of 4KB and maximum block size of 1MB. When you power on a virtual machine, the operation fails and the following messages appear:

    An error was received from the ESX host while powering on VM.

    Failed to start the virtual machine.

    Module DiskEarly power on failed.

    Failed to configure disk scsi0:0.

    The virtual machine cannot be powered on with an unconfigured disk. vFlash cache cannot be attached: msg.vflashcache.error.VFC_FAILURE

    Workaround: Configure virtual machine Flash Read Cache size and block size.

    1. Right-click the virtual machine and select Edit Settings.
    2. On the Virtual Hardware tab, expand Hard disk to view the disk options.
    3. Click Advanced next to the Virtual Flash Read Cache field.
    4. Increase the cache size reservation or decrease the block size.
    5. Click OK to save your changes.
  • A custom extension of a saved resource pool tree file cannot be loaded in the vSphere Web Client
    When you disable DRS in the vSphere Web Client, you are prompted to save the resource pool structure so that it can be reloaded in the future. The default extension of this file is .snapshot, but you can select a different extension for this file. If the file has a custom extension, it appears as disabled when you try to load it. This behavior is observed only on OS X.

    Workaround: Change the extension to .snapshot to load it in the vSphere Web Client on OS X.

  • DRS error message appears on the host summary page
    The following DRS error message appears on the host summary page:

    Unable to apply DRS resource settings on host. The operation is not allowed in the current state. This can significantly reduce the effectiveness of DRS.

    In some configurations a race condition might result in the creation of an error message in the log that is not meaningful or actionable. This error might occur if a virtual machine is unregistered at the same time that DRS resource settings are applied.

    Workaround: Ignore this error message.

  • Configuring virtual Flash Read Cache for VMDKs larger than 16TB results in an error
    Virtual Flash Read Cache does not support virtual machine disks larger than 16TB. Attempts to configure such disks will fail.

    Workaround: None

  • Virtual machines might power off when the cache size is reconfigured
    If you incorrectly reconfigure the virtual Flash Read Cache on a virtual machine, for example by assigning an invalid value, the virtual machine might power off.

    Workaround: Follow the recommended cache size guidelines in the vSphere Storage documentation.

  • Reconfiguring a virtual machine with virtual Flash Read Cache enabled might fail with the Operation timed out error
    Reconfiguration operations require a significant amount of I/O bandwidth. When you run a heavy load, such operations might time out before they finish. You might also see this behavior if the host has LUNs that are in an all paths down (APD) state.

    Workaround: Fix all host APD states and retry the operation with a smaller I/O load on the LUN and host.

  • DRS does not vMotion virtual machines with virtual Flash Read Cache for load balancing purpose
    DRS does not vMotion virtual machines with virtual Flash Read Cache for load balancing purposes.

    Workaround: DRS does not recommend these virtual machines for vMotion except for the following reasons:

    • To evacuate a host that the user has requested to enter maintenance or standby mode.
    • To fix DRS rule violations.
    • Host resource usage is in red state.
    • One or more hosts is overutilized and virtual machine demand is not being met.
      Note: You can optionally set DRS to ignore this reason.
  • Hosts are put in standby when the active memory of virtual machines is low but consumed memory is high
    ESXi 5.5 introduces a change in the default behavior of DPM designed to make the feature less aggressive, which can help prevent performance degradation for virtual machines when active memory is low but consumed memory is high. The DPM metric is X%*IdleConsumedMemory + active memory. The X% variable is adjustable and is set to 25% by default.

    Workaround: You can revert to the aggressive DPM behavior found in earlier releases of ESXi by setting PercentIdleMBInMemDemand=0 in the advanced options.

  • vMotion initiated by DRS might fail
    When DRS recommends vMotion for virtual machines with a virtual Flash Read Cache reservation, vMotion might fail because the memory (RAM) available on the target host is insufficient to manage the Flash Read Cache reservation of the virtual machines.

    Workaround: Follow the Flash Read Cache configuration recommendations documented in vSphere Storage.
    If vMotion fails, perform the following steps:

    1. Reconfigure the block sizes of the virtual machines on the target host and the incoming virtual machines to reduce the overall target usage of the VMkernel memory on the target host.
    2. Use vMotion to manually migrate the virtual machine to the target host to ensure the condition is resolved.
  • You are unable to view problems that occur during virtual flash configuration of individual SSD devices
    The configuration of virtual flash resources is a task that operates on a list of SSD devices. When the task finishes for all objects, the vSphere Web Client reports it as successful, and you might not be notified of problems with the configuration of individual SSD devices.

    Workaround: Perform one of the following tasks.

    • In the Recent Tasks panel, double-click the completed task.
      Any configuration failures appear in the Related events section of the Task Details dialog box.
    • Alternatively, follow these steps:
      1. Select the host in the inventory.
      2. Click the Monitor tab, and click Events.
  • Unable to obtain SMART information for Micron PCIe SSDs on the ESXi host
    Your attempts to use the esxcli storage core device smart get -d command to display statistics for the Micron PCIe SSD device fail. You get the following error message:
    Error getting Smart Parameters: CANNOT open device

    Workaround: None. In this release, the esxcli storage core device smart command does not support Micron PCIe SSDs.

  • ESXi does not apply the bandwidth limit that is configured for a SCSI virtual disk in the configuration file of a virtual machine
    You configure the bandwidth and throughput limits of a SCSI virtual disk by using a set of parameters in the virtual machine configuration file (.vmx). For example, the configuration file might contain the following limits for a scsi0:0 virtual disk:
    sched.scsi0:0.throughputCap = "80IOPS"
    sched.scsi0:0.bandwidthCap = "10MBps"
    sched.scsi0:0.shares = "normal"

    ESXi does not apply the sched.scsi0:0.bandwidthCap limit to the scsi0:0 virtual disk.

    Workaround: Revert to an earlier version of the disk I/O scheduler by using the vSphere Web Client or the esxcli system settings advanced set command.

    • In the vSphere Web Client, edit the Disk.SchedulerWithReservation parameter in the Advanced System Settings list for the host.
      1. Navigate to the host.
      2. On the Manage tab, select Settings and select Advanced System Settings.
      3. Locate the Disk.SchedulerWithReservation parameter, for example, by using the Filter or Find text boxes.
      4. Click Edit and set the parameter to 0.
      5. Click OK.
    • In the ESXi Shell to the host, run the following console command:
      esxcli system settings advanced set -o /Disk/SchedulerWithReservation -i=0
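
    To confirm that the new value is in effect, you can read it back from the ESXi Shell, for example:
      esxcli system settings advanced list -o /Disk/SchedulerWithReservation
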
  • A virtual machine configured with Flash Read Cache cannot be migrated off a host if there is an error in the cache
    A virtual machine with Flash Read Cache configured might have a migration error if the cache is in an error state and is unusable. This error causes migration of the virtual machine to fail.

    Workaround:

    1. Reconfigure the virtual machine and disable the cache.
    2. Perform the migration.
    3. Re-enable the cache after the virtual machine is migrated.

    Alternatively, power the virtual machine off and then on again to correct the error with the cache.

  • You cannot delete the VFFS volume after a host is upgraded from ESXi 5.5 Beta
    You cannot delete the VFFS volume after a host is upgraded from ESXi 5.5 Beta.

    Workaround: This occurs only when you upgrade from ESXi 5.5 Beta to ESXi 5.5. To avoid this problem, install ESXi 5.5 instead of upgrading. If you upgrade from ESXi 5.5 Beta, delete the VFFS volume before you upgrade.

  • Expected latency runtime improvements are not seen when virtual Flash Read Cache is enabled on virtual machines with older Windows and Linux guest operating systems
    Virtual Flash Read Cache provides optimal performance when the cache is sized to match the target working set, and when the guest file systems are aligned to at least a 4KB boundary. The Flash Read Cache filters out misaligned blocks to avoid caching partial blocks within the cache. This behavior is typically seen when virtual Flash Read Cache is configured for VMDKs of virtual machines with Windows XP and Linux distributions earlier than 2.6. In such cases, a low cache hit rate with a low cache occupancy is observed, which implies a waste of cache reservation for such VMDKs. This behavior is not seen with virtual machines running Windows 7, Windows 2008, and Linux 2.6 and later distributions, which align their file systems to a 4KB boundary to ensure optimal performance.

    Workaround: To improve the cache hit rate and optimal use of the cache reservation for each VMDK, ensure that the guest operating system file system on the VMDK is aligned to at least a 4KB boundary.

Virtual SAN

  • VM directories contain duplicate swap (.vswp) files*
    This might occur if virtual machines running on Virtual SAN are not cleanly shut down, and if you perform a fresh installation of ESXi and vCenter Server without erasing data from the Virtual SAN disks. As a result, old swap files (.vswp) are found in the directories of virtual machines that were shut down uncleanly.

    Workaround: None

  • Attempts to add more than seven magnetic disks to a Virtual SAN disk group might fail with incorrect error message*
    A Virtual SAN disk group supports a maximum of one SSD and seven magnetic disks (HDDs). Attempts to add an additional magnetic disk might fail with an incorrect error message similar to the following:

    The number of disks is not sufficient.

    Workaround: None
  • Re-scan failure experienced while adding a Virtual SAN disk*
    When you add a Virtual SAN disk, the re-scan fails because of a probe failure on a non-Virtual SAN volume, which causes the operation to fail.

    Workaround: Ignore the error as all the disks are registered correctly.
  • A hard disk drive (HDD) that is removed after its associated solid state drive (SSD) is removed might still be listed as a storage disk claimed by Virtual SAN*
    If an SSD and then its associated HDD are removed from a Virtual SAN datastore and you run the esxcli vsan storage list command, the removed HDD is still listed as a storage disk claimed by Virtual SAN. If the HDD is inserted back in a different host, the disk might appear to be part of two different hosts.

    Workaround: For example, if an SSD and HDD are removed from host ESXi x and inserted into host ESXi y, perform the following steps to prevent the HDD from appearing to be a part of both ESXi x and ESXi y:
    1. Insert the SSD and HDD removed from ESXi x into ESXi y.
    2. Decommission the SSD from ESXi x.
    3. Run the command esxcfg-rescan -A.
       The HDD and SSD will no longer be listed on ESXi x.
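    To confirm the result, you can check which disks Virtual SAN claims on each host with the command mentioned above:
       esxcli vsan storage list    # the removed HDD should no longer be listed on ESXi x
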
  • The Working with Virtual SAN section of the vSphere Storage documentation indicates that the maximum number of HDD disks per disk group is six. However, the maximum allowed number of HDDs is seven.*
  • After a failure in a Virtual SAN cluster, vSphere HA might report multiple events, some misleading, before restarting a virtual machine*
    The vSphere HA master agent makes multiple attempts to restart a virtual machine running on Virtual SAN after it appears to have failed. If the virtual machine cannot be immediately restarted, the master agent monitors the cluster state and makes another attempt when conditions indicate that a restart might be successful. For virtual machines running on Virtual SAN, the vSphere HA master has special application logic to detect when the accessibility of a virtual machine's objects might have changed, and attempts a restart whenever an accessibility change is likely. The master agent makes an attempt after each possible accessibility change, and if it does not successfully power on the virtual machine, it gives up and waits for the next possible accessibility change.

    After each failed attempt, vSphere HA reports an event indicating that the failover was not successful, and after five failed attempts, reports that vSphere HA stopped trying to restart the virtual machine because the maximum number of failover attempts was reached. Even after reporting that the vSphere HA master agent has stopped trying, however, it does try the next time a possible accessibility change occurs.

    Workaround: None.

  • Powering off a Virtual SAN host causes the Storage Providers view in the vSphere Web Client to take longer than expected to refresh*
    If you power off a Virtual SAN host, the Storage Providers view might appear empty. The Refresh button continues to spin even though no information is shown.

    Workaround: Wait at least 15 minutes for the Storage Providers view to be populated again. The view also refreshes after you power on the host.

  • Virtual SAN reports a failed task as completed*
    Virtual SAN might report certain tasks as completed even though they failed internally.

    The following are conditions and corresponding reasons for errors:

    • Condition: Users attempt to create a new disk group or add a new disk to an existing disk group when the Virtual SAN license has expired.
      Error stack: A general system error occurred: Cannot add disk: VSAN is not licensed on this host.
    • Condition: Users attempt to create a disk group with a number of disks higher than the supported number, or to add new disks to an existing disk group so that the total exceeds the supported number of disks per disk group.
      Error stack: A general system error occurred: Too many disks.
    • Condition: Users attempt to add a disk to the disk group that has errors.
      Error stack: A general system error occurred: Unable to create partition table.

    Workaround: After identifying the reason for a failure, correct the reason and perform the task again.

  • Virtual SAN datastores cannot store host local and system swap files*
    Typically, you can place the system swap or host local swap file on a datastore. However, the Virtual SAN datastore does not support system swap and host local swap files. As a result, the UI option that allows you to select the Virtual SAN datastore as the file location for system swap or host local swap is not available.

    Workaround: In a Virtual SAN environment, use other supported options to place the system swap and host local swap files.

  • A Virtual SAN virtual machine in a vSphere HA cluster is reported as vSphere HA protected although it has been powered off*
    This might happen when you power off a virtual machine whose home object resides on a Virtual SAN datastore and the home object is not accessible. This problem is seen if an HA master agent election occurs after the object becomes inaccessible.

    Workaround:

    1. Make sure that the home object is accessible again by checking the compliance of the object with the specified storage policy.
    2. Power on the virtual machine then power it off again.

    The status should change to unprotected.

  • Virtual machine object remains in Out of Date status even after Reapply action is triggered and completed successfully*
    If you edit an existing virtual machine profile because of new storage requirements, the associated virtual machine objects, home or disk, might go into Out of Date status. This occurs when your current environment cannot support reconfiguration of the virtual machine objects. Using the Reapply action does not change the status.

    Workaround: Add additional resources (hosts or disks) to the Virtual SAN cluster and invoke the Reapply action again.

  • Automatic disk claiming for Virtual SAN does not work as expected if you license Virtual SAN after enabling it*
    If you enable Virtual SAN in automatic mode and then assign a license, Virtual SAN fails to claim disks.

    Workaround: Change the mode to Manual, and then switch back to Automatic. Virtual SAN will properly claim the disks.

  • vSphere High Availability (HA) fails to restart a virtual machine when Virtual SAN network is partitioned*
    This occurs when Virtual SAN uses VMkernel adapters for internode communication that are on the same subnet as other VMkernel adapters in a cluster. Such a configuration could cause a network failure and disrupt Virtual SAN internode communication, while vSphere HA internode communication remains unaffected.

    In this situation, the HA master agent might detect the failure in a virtual machine, but is unable to restart it. For example, this could occur when the host on which the master agent is running does not have access to the virtual machine's objects.

    Workaround: Make sure that the VMkernel adapters used by Virtual SAN do not share a subnet with the VMkernel adapters used for other purposes.

  • VMs might become inaccessible due to high network latency*
    In a Virtual SAN cluster setup, if the network latency is high, some VMs might become inaccessible on vCenter Server and you will not be able to power on or access the VM.

    Workaround: Run the vsan.check_state -e -r command in the Ruby vSphere Console (RVC).
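
    A minimal sketch of running the command follows; the vCenter Server address, credentials, and cluster path are placeholders, so adjust them for your environment:

      rvc administrator@vsphere.local@vcenter.example.com
      > vsan.check_state -e -r /vcenter.example.com/Datacenter/computers/VSAN-Cluster
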
  • VM operations might timeout due to high network latency*
    When storage controllers with low queue depths are used, high network latency might cause VM operations to time out.

    Workaround: Re-attempt the operations when the network load is lower.
  • VMs might get renamed to a truncated version of their vmx file path*
    If the .vmx file of a virtual machine is temporarily inaccessible, the VM is renamed to a truncated version of the .vmx file path. For example, the virtual machine might be renamed to /vmfs/volumes/vsan:52f1686bdcb477cd-8e97188e35b99d2e/236d5552-ad93. The truncation might delete half of the UUID of the VM home directory, making it difficult to map the renamed VM to the original VM from the VM name alone.

    Workaround: Run the vsan.fix_renamed_vms RVC command.
  • Flaky network might cause partition of the Virtual SAN cluster*
    If the Virtual SAN network is flaky because of issues related to DHCP or DNS, the Virtual SAN cluster might become partitioned.
    The log files might contain entries similar to the following:
    var/run/log/dhclient.log:2013-11-08T15:59:07Z dhclient-uw[33625]: ipv4: Unbinding interface
    var/run/log/dhclient.log:2013-11-08T15:59:23Z dhclient-uw[33635]: ipv4: Unbinding interface


    Workaround: Use static IPs.

vCenter Server and vSphere Web Client

  • Unable to add ESXi host to Active Directory domain*
    You might observe that the Active Directory domain name is not displayed in the Domain drop-down list under the Select Users and Groups option when you attempt to assign permissions. Also, the Authentication Services Settings option might not display any trusted domain controller even when Active Directory has trusted domains.

    Workaround:
    1. Restart the netlogond, lwiod, and then lsassd daemons (example commands follow these steps).
    2. Log in to the ESXi host by using the vSphere Client.
    3. On the Configuration tab, click Authentication Services Settings.
    4. Refresh to view the trusted domains.
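
    The daemons in step 1 can be restarted from the ESXi Shell with their init scripts, assuming the standard script locations on ESXi 5.x:

      /etc/init.d/netlogond restart
      /etc/init.d/lwiod restart
      /etc/init.d/lsassd restart
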
Virtual Machine Management Issues
  • Virtual machines with Windows 7 Enterprise 64-bit guest operating systems in the French locale experience problems during clone operations
    If you have a cloned Windows 7 Enterprise 64-bit virtual machine that is running in the French locale, the virtual machine disconnects from the network and the customization specification is not applied. This issue appears when the virtual machine is running on an ESXi 5.1 host and you clone it to ESXi 5.5 and upgrade the VMware Tools version to the latest version available with the 5.5 host.

    Workaround: Upgrade the virtual machine compatibility to ESXi 5.5 and later before you upgrade to the latest available version of VMware Tools.

  • Attempts to increase the size of a virtual disk on a running virtual machine fail with an error
    If you increase the size of a virtual disk when the virtual machine is running, the operation might fail with the following error:

    This operation is not supported for this device type.

    The failure might occur if you are extending the disk to a size of 2TB or larger. The hot-extend operation supports increasing the disk size only to 2TB or less. SATA virtual disks do not support the hot-extend operation regardless of their size.

    Workaround: Power off the virtual machine to extend the virtual disk to 2TB or larger.
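
    With the virtual machine powered off, one way to extend the disk is with vmkfstools from the ESXi Shell; the datastore path and target size below are examples only:

      # Example: grow the powered-off virtual machine's disk to 3TB (expressed in gigabytes)
      vmkfstools -X 3072g /vmfs/volumes/datastore1/example-vm/example-vm.vmdk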

VMware HA and Fault Tolerance Issues
  • If you select an ESX/ESXi 4.0 or 4.1 host in a vSphere HA cluster to fail over a virtual machine, the virtual machine might not restart as expected
    When vSphere HA restarts a virtual machine on an ESX/ESXi 4.0 or 4.1 host that is different from the original host the virtual machine was running on, a query is issued that is not answered. The virtual machine is not powered on on the new host until you answer the query manually from the vSphere Client.

    Workaround: Answer the query from the vSphere Client. Alternatively, you can wait for a timeout (15 minutes by default), and vSphere HA attempts to restart the virtual machine on a different host. If the host is running ESX/ESXi 5.0 or later, the virtual machine is restarted.

  • If a vMotion operation without shared storage fails in a vSphere HA cluster, the destination virtual machine might be registered to an unexpected host
    A vMotion migration involving no shared storage might fail because the destination virtual machine does not receive a handshake message that coordinates the transfer of control between the two virtual machines. The vMotion protocol powers off both the source and destination virtual machines. If the source and destination hosts are in the same cluster and if vSphere HA has been enabled, the destination virtual machine might be registered by vSphere HA on another host than the one chosen as the target for the vMotion migration.

    Workaround: If you want to retain the destination virtual machine and you want it to be registered to a specific host, relocate the destination virtual machine to the destination host. This relocation is best done before powering on the virtual machine.

Supported Hardware Issues
  • Sensor values for Fan, Power Supply, Voltage, and Current sensors appear under the Other group of the vCenter Server Hardware Status Tab
    Some sensor values are listed in the Other group instead of the respective categorized group.

    Workaround: None.

  • I/O memory management unit (IOMMU) faults might appear when the debug direct memory access (DMA) mapper is enabled
    The debug mapper places devices in IOMMU domains to help catch device memory accesses to addresses that have not been explicitly mapped. On some HP systems with old firmware, IOMMU faults might appear.

    Workaround: Download firmware upgrades from the HP Web site and apply them.

    • Upgrade the firmware of the HP iLO2 controller.
      Version 2.07, released in August 2011, resolves the problem.
    • Upgrade the firmware of the HP Smart Array.
      For the HP Smart Array P410, version 5.14, released in January 2012, resolves the problem.

VMware Tools Issues

  • File disappears after VMware Tools upgrade*
    The deployPkg.dll file, which is present in C:\Program Files\VMware\VMware Tools\, is not found after you upgrade VMware Tools. This is observed when VMware Tools is upgraded from version 5.1 Update 2 to 5.5 Update 1, and from version 5.5 to 5.5 Update 1.

    Workaround: None
  • User is forcefully logged out while installing or uninstalling VMware Tools by using OSPs*
    While installing or uninstalling VMware Tools packages in Red Hat Enterprise Linux (RHEL) and CentOS virtual machines by using operating system specific packages (OSPs), the current user is forcefully logged out. This issue occurs in RHEL 6.5 64-bit, RHEL 6.5 32-bit, CentOS 6.5 64-bit, and CentOS 6.5 32-bit virtual machines.

    Workaround:
    • Use secure shell (SSH) to install or uninstall the VMware Tools packages (see the example after this list)
      or
    • The user must log in again to install or uninstall the VMware Tools packages
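
    For the SSH option, a minimal sketch of installing the OSPs on a RHEL or CentOS 6.5 guest follows. The host name and package name are assumptions; the package name also assumes that the VMware Tools OSP yum repository from packages.vmware.com is already configured in the guest:

      # Connect to the guest over SSH, then install the OSPs inside the guest
      ssh root@rhel65-vm.example.com
      yum install vmware-tools-esx-nox    # package name is an assumption; adjust for your repository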