VMware ESXi 5.1 Update 2 Release Notes
VMware ESXi 5.1 Update 2 | 16 JAN 2014 | Build 1483097
Last updated: 23 JUL 2014
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Earlier Releases of ESXi 5.1
- Internationalization
- Compatibility and Installation
- Upgrades for This Release
- Open Source Components for VMware vSphere 5.1 Update 2
- Product Support Notices
- Patches Contained in this Release
- Resolved Issues
What's New
This release of VMware ESXi contains the following enhancements:
- Support for additional guest operating systems – This release updates support for many guest operating systems.
For a complete list of guest operating systems supported with this release, see the VMware Compatibility Guide.
- Resolved Issues – This release also delivers a number of bug fixes that have been documented in the Resolved Issues section.
Earlier Releases of ESXi 5.1
Features and known issues of ESXi 5.1 are described in the release notes for each release. To view release notes for earlier releases of ESXi 5.1, see the release notes for each earlier release.
Internationalization
VMware vSphere 5.1 Update 2 is available in the following languages:
- English
- French
- German
- Japanese
- Korean
- Simplified Chinese
Compatibility and Installation
Upgrading vSphere Client
After you upgrade vCenter Server or the ESXi host to vSphere 5.1 Update 2 and attempt to connect to the vCenter Server or the ESXi host using a version of vSphere Client earlier than 5.1 Update 1b, you are prompted to upgrade the vSphere Client to vSphere Client 5.1 Update 2. The vSphere Client upgrade is mandatory. You must use only the upgraded vSphere Client to access vSphere 5.1 Update 2.
ESXi, vCenter Server, and vSphere Web Client Version Compatibility
The VMware
Product Interoperability Matrix provides details about the compatibility of current and previous
versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client,
and optional VMware products. In addition, check this site for information about supported management
and backup agents before installing ESXi or vCenter Server.
The vSphere Client and the vSphere Web Client are packaged with the vCenter Server and modules
ZIP file. You can install one or both clients from the VMware vCenter™ Installer wizard.
ESXi, vCenter Server, and VDDK Compatibility
Virtual Disk Development Kit (VDDK) 5.1.2 adds support for ESXi 5.1 Update 2 and vCenter Server 5.1 Update 2 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.
Hardware Compatibility for ESXi
To determine which processors, storage devices, SAN arrays, and I/O devices are compatible with
vSphere 5.1 Update 2, use the ESXi 5.1 Update 2 information in the
VMware Compatibility
Guide.
The list of supported processors is expanded for this release. To determine which processors
are compatible with this release, use the VMware Compatibility
Guide.
Guest Operating System Compatibility for ESXi
To determine which guest operating systems are compatible with ESXi 5.1 Update 2, use the
ESXi 5.1 Update 2 information in the VMware Compatibility Guide.
Beginning with vSphere 5.1, support level changes for older guest operating systems have
been introduced. For descriptions of each support level, see
Knowledge Base article 2015161. The
VMware Compatibility Guide
provides detailed support information for all operating system releases and VMware product releases.
The following guest operating system releases that are no longer supported by their respective operating
system vendors are deprecated. Future vSphere releases will not support these guest operating systems,
although vSphere 5.1 Update 2 does support them.
- Windows NT
- All 16-bit Windows and DOS releases (Windows 98, Windows 95, Windows 3.1)
- Debian 4.0 and 5.0
- Red Hat Enterprise Linux 2.1
- SUSE Linux Enterprise 8
- SUSE Linux Enterprise 9 prior to SP4
- SUSE Linux Enterprise 10 prior to SP3
- SUSE Linux Enterprise 11 prior to SP1
- Ubuntu releases 8.04, 8.10, 9.04, 9.10, and 10.10
- All releases of Novell NetWare
- All releases of IBM OS/2
Virtual Machine Compatibility for ESXi
Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with
ESXi 5.1 Update 2. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are no longer
supported. To use such virtual machines on ESXi 5.1 Update 2, upgrade the virtual machine compatibility. See the
vSphere Upgrade documentation.
Installation Notes for This Release
Read the vSphere Installation and Setup documentation for step-by-step guidance on installing and
configuring ESXi and vCenter Server.
Although the installations are straightforward, several subsequent configuration steps are essential.
In particular, read the following:
Migrating Third-Party Solutions
You cannot directly migrate third-party solutions installed on an ESX or ESXi host as part of a host upgrade. Architectural changes between ESXi 5.0 and ESXi 5.1 result in the loss of third-party components and possible system instability. To accomplish such migrations, you can create
a custom ISO file with Image Builder. For information about upgrading with third-party
customizations, see the vSphere Upgrade documentation. For information about using Image
Builder to make a custom ISO, see the vSphere Installation and Setup documentation.
Upgrades and Installations Disallowed for Unsupported CPUs
vSphere 5.1 Update 2 supports only CPUs with LAHF and SAHF CPU instruction sets. During an installation or upgrade,
the installer checks the compatibility of the host CPU with vSphere 5.1 Update 2. If your host hardware is not compatible,
a purple screen appears with an incompatibility information message, and you cannot install or upgrade to vSphere 5.1 Update 2.
Upgrades for This Release
For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.
ESXi 5.1 Update 2 offers the following tools for upgrading ESX/ESXi hosts:
- Upgrade interactively using an ESXi installer ISO image on CD-ROM, DVD, or USB flash drive. You can run the ESXi 5.1 Update 2 installer from a CD-ROM, DVD, or USB flash drive to do an interactive upgrade. This method is appropriate for a small number of hosts.
- Perform a scripted upgrade. You can upgrade or migrate from ESX/ESXi 4.x hosts, ESXi 5.0.x hosts, and ESXi 5.1.x hosts to ESXi 5.1 Update 2 by invoking an update script, which provides an efficient, unattended upgrade. Scripted upgrades also provide an efficient way to deploy multiple hosts. You can use a script to upgrade ESXi from a CD-ROM or DVD drive, or by PXE-booting the installer.
- vSphere Auto Deploy. If your ESXi 5.x host was deployed using vSphere Auto Deploy, you can use Auto Deploy to reprovision the host by rebooting it with a new image profile that contains the ESXi upgrade.
- esxcli. You can update and apply patches to ESXi 5.1.x hosts by using the esxcli command-line utility, either from a download depot on vmware.com or from a downloaded ZIP file of a depot that is prepared by a VMware partner. You cannot use esxcli to upgrade ESX or ESXi hosts to version 5.1.x from ESX/ESXi versions earlier than version 5.1.
Supported Upgrade Paths for Upgrade to ESXi 5.1 Update 2:

Upgrade Deliverables | Supported Upgrade Tools | ESX/ESXi 4.0 (includes Updates 1, 2, 3, and 4) | ESX/ESXi 4.1 (includes Updates 1, 2, and 3) | ESXi 5.0 (includes Updates 1, 2, and 3) | ESXi 5.1 (includes Update 1)
VMware-VMvisor-Installer-5.1.0.update02-1483097.x86_64.iso | VMware vCenter Update Manager, CD Upgrade, Scripted Upgrade | Yes | Yes | Yes | Yes
update-from-esxi5.1-5.1_update02.zip | VMware vCenter Update Manager, ESXCLI, VMware vSphere CLI | No | No | Yes* | Yes
Using patch definitions downloaded from VMware portal (online) | VMware vCenter Update Manager with patch baseline | No | No | No | Yes
*Note: Upgrade from ESXi 5.0.x to ESXi 5.1 Update 2 using update-from-esxi5.1-5.1_update02.zip is supported only with ESXCLI. You must run the esxcli software profile update --depot=<depot_location> --profile=<profile_name> command to perform the upgrade. For more information, see the ESXi 5.1.x Upgrade Options topic in the vSphere Upgrade guide.
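For example, assuming the offline bundle has been copied to a datastore path such as /vmfs/volumes/datastore1 (a placeholder), you can first list the image profiles contained in the bundle and then run the upgrade with ESXCLI:
# esxcli software sources profile list --depot=/vmfs/volumes/datastore1/update-from-esxi5.1-5.1_update02.zip
# esxcli software profile update --depot=/vmfs/volumes/datastore1/update-from-esxi5.1-5.1_update02.zip --profile=ESXi-5.1.0-20140102001-standard
The first command shows the profile names available in the depot; the second applies the standard profile listed in the Patches Contained in this Release section. Reboot the host after the update completes.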
Open Source Components for VMware vSphere 5.1 Update 2
The copyright statements and licenses applicable to the open source software components distributed in
vSphere 5.1 Update 2 are available at http://www.vmware.com/download/open_source.html. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code
to be made available for the most recent generally available release of vSphere.
Product Support Notices
- vSphere Client. In vSphere 5.1, all new vSphere features are available only through the vSphere Web Client. The traditional vSphere Client will continue to operate, supporting the same feature set as vSphere 5.0, but not exposing any of the new features in vSphere 5.1.
vSphere 5.1 and its subsequent update and patch releases are the last releases to include the traditional vSphere Client. Future major releases of VMware vSphere will include only the vSphere Web Client.
For vSphere 5.1, bug fixes for the traditional vSphere Client are limited to security or critical issues. Critical bugs are deviations from specified product functionality that cause data corruption, data loss, system crash, or significant customer application downtime for which no workaround can be implemented.
- VMware Toolbox. vSphere 5.1 is the last release to include support for the VMware Tools graphical user interface, VMware Toolbox. VMware will continue to update and support the Toolbox command-line interface (CLI) to perform all VMware Tools functions.
- VMI Paravirtualization. vSphere 4.1 was the last release to support the VMI guest operating system paravirtualization interface. For information about migrating virtual machines that are enabled for VMI so that they can run on later vSphere releases, see Knowledge Base article 1013842.
- Windows Guest Operating System Customization. vSphere 5.1 is the last release to support customization for Windows 2000 guest operating systems. VMware will continue to support customization for newer versions of Windows guests.
- VMCI Sockets. Guest-to-guest communications (virtual machine to virtual machine) are deprecated in the vSphere 5.1 release.
This functionality will be removed in the next major release. VMware will continue to support host-to-guest communications.
Patches Contained in this Release
This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.
Patch Release ESXi510-Update02 contains the following individual bulletins:
ESXi510-201401201-UG: Updates ESXi 5.1 esx-base vib
ESXi510-201401202-UG: Updates ESXi 5.1 tools-light vib
ESXi510-201401203-UG: Updates ESXi 5.1 net-tg3 vib
ESXi510-201401204-UG: Updates ESXi 5.1 net-e1000e vib
ESXi510-201401205-UG: Updates ESXi 5.1 scsi-rste vib
ESXi510-201401206-UG: Updates ESXi 5.1 scsi-mpt2sas vib
ESXi510-201401207-UG: Updates ESXi 5.1 sata-ata-piix vib
ESXi510-201401208-UG: Updates ESXi 5.1 sata-ahci vib
Patch Release ESXi510-Update02 Security-only contains the following individual bulletins:
ESXi510-201401101-SG: Updates ESXi 5.1 esx-base vib
ESXi510-201401102-SG: Updates ESXi 5.1 tools-light vib
ESXi510-201401103-SG: Updates ESXi 5.1 esx-xlibs vib
Patch Release ESXi510-Update02 contains the following image profiles:
ESXi-5.1.0-20140102001-standard
ESXi-5.1.0-20140102001-no-tools
Patch Release ESXi510-Update02 Security-only contains the following image profiles:
ESXi-5.1.0-20140101001s-standard
ESXi-5.1.0-20140101001s-no-tools
For information on patch and update classification, see KB 2014447.
Resolved Issues
This section describes resolved issues in this release:
CIM and API
-
ESXi host is disconnected from vCenter Server due to sfcbd exhausting inodes
ESXi hosts disconnect from vCenter Server and cannot be reconnected to the vCenter Server. This issue is caused by the hardware monitoring service (sfcbd) that populates the /var/run/sfcb directory with over 5000 files.
The hostd.log file located at /var/log/ indicates that the host is out of space:
VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device
VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device
The vmkernel.log file located at /var/log indicates that the host is out of inodes:
cpu4:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
cpu5:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
This issue is resolved in this release.
-
Incorrect error messages might be displayed by CIM providers
Incorrect error messages similar to the following might be displayed by CIM providers:
\"Request Header Id (886262) != Response Header reqId (0) in request to provider 429 in process 5. Drop response.\"
This issue is resolved in this release by updating the error log and restarting the sfcbd management agent to display the correct error messages similar to the following:
Header Id (373) Request to provider 1 in process 0 failed. Error:Timeout (or other socket error) waiting for response from provider.
-
The hardware monitoring service stops and the Hardware Status tab only displays an error message
The Hardware Status tab fails to display health statuses and displays an error message similar to the following:
Hardware monitoring service on this host is not responding or not available.
The hardware monitoring service (sfcbd) stops and the syslog file contains entries similar to the following:
sfcb-smx[xxxxxx]: spRcvMsg Receive message from 12 (return socket: 6750210)
sfcb-smx[xxxxxx]: --- spRcvMsg drop bogus request chunking 6 payLoadSize 19 chunkSize 0 from 12 resp 6750210
sfcb-smx[xxxxxx]: spRecvReq returned error -1. Skipping message.
sfcb-smx[xxxxxx]: spRcvMsg Receive message from 12 (return socket: 4)
sfcb-smx[xxxxxx]: --- spRcvMsg drop bogus request chunking 220 payLoadSize 116 chunkSize 104 from 12 resp 4
...
...
sfcb-vmware_int[xxxxxx]: spGetMsg receiving from 40 419746-11 Resource temporarily unavailable
sfcb-vmware_int[xxxxxx]: rcvMsg receiving from 40 419746-11 Resource temporarily unavailable
sfcb-vmware_int[xxxxxx]: Timeout or other socket error
This issue is resolved in this release.
-
WS-Management GetInstance () action might issue a wsa:DestinationUnreachable fault on an ESXi server
WS-Management GetInstance() action against http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_SoftwareIdentity?InstanceID=46.10000 might issue a wsa:DestinationUnreachable fault on some ESXi servers. The OMC_MCFirmwareIdentity object path is not consistent for CIM gi/ei/ein operations on systems with an Intelligent Platform Management Interface (IPMI) Baseboard Management Controller (BMC) sensor. As a result, the WS-Management GetInstance() action issues a wsa:DestinationUnreachable fault on the ESXi server.
This issue is resolved in this release.
-
vmklinux_9:ipmi_thread of vmkapimod reports 100 percent CPU usage for one hour
On an ESXi host, when the Field Replaceable Unit (FRU) inventory data is read using the Intelligent Platform Management Interface (IPMI) tool, the vmklinux_9:ipmi_thread of vmkapimod reports 100 percent CPU usage. This happens because the IPMI tool uses the Read FRU Data command multiple times to read the inventory data.
This issue is resolved in this release.
-
Cannot monitor ESXi host's hardware status
When the SFCBD service is enabled in trace mode and the service stops running, the Hardware Status tab for an ESXi host might report an error, and third-party tools might not be able to monitor the ESXi host's hardware status.
This issue is resolved in this release.
-
Unable to clear Intelligent Platform Management Interface (IPMI) System Event Log (SEL) on the ESXi host
The host IPMI System Event Log is not cleared in a cluster environment.
This issue is resolved in this release by adding new CLI support to clear the IPMI SEL.
-
LSI CIM provider leaks file descriptors
The LSI CIM provider (one of the sfcb processes) leaks file descriptors. This might cause sfcb-hhrc to stop and sfcbd to restart. The syslog file might log messages similar to the following:
sfcb-LSIESG_SMIS13_HHR[ ]: Error opening socket pair for getProviderContext: Too many open files
sfcb-LSIESG_SMIS13_HHR[ ]: Failed to set recv timeout (30) for socket -1. Errno = 9
...
...
sfcb-hhrc[ ]: Timeout or other socket error
sfcb-hhrc[ ]: TIMEOUT DOING SHARED SOCKET RECV RESULT ( )
This issue is resolved in this release.
-
Unable to disable weak ciphers on CIM port 5989
To disable cipher block chaining (CBC) algorithms for Payment Card Industry (PCI) compliance, you might need to disable weak ciphers on CIM port 5989. This is not permitted.
This issue is resolved in this release. You can update the configuration in sfcb.cfg to disable weak ciphers by using the following commands:
# vi /etc/sfcb/sfcb.cfg
sslCipherList: HIGH:!DES-CBC3-SHA
# /etc/init.d/sfcbd-watchdog restart
-
sfcb-CIMXML-Pro freezes and causes a core dump of the sfcb-CIMXML-Pro process
The sfcb-CIMXML-Pro process freezes and dumps core when the sfcb subprocess for HTTP processing incorrectly calls atexit and exit after a fork.
This issue is resolved in this release.
Miscellaneous
-
Some 3D applications display misplaced geometry when 3D support is enabled on a Windows 7 or Windows 8 virtual machine
Some 3D applications display misplaced geometry on a Windows 7 or Windows 8 virtual machine if the Enable 3D support option is selected.
This issue is resolved in this release.
-
Netlogond might stop responding and cause ESXi host to lose Active Directory functionality
Netlogond might consume high memory in an Active Directory environment with multiple unreachable Domain Controllers. As a result, Netlogond might fail and the ESXi host might lose Active Directory
functionality.
This issue is resolved in this release.
-
The snmpwalk command for a specific snmp OID fails
When you run the snmpwalk command for ifHCOutOctets 1.3.6.1.2.1.31.1.1.1.10, an error message similar to the following is displayed:
No Such Instance currently exists at this OID for ifHCOutOctets 1.3.6.1.2.1.31.1.1.1.10
This issue is resolved in this release.
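For reference, the affected OID can be queried with a standard net-snmp snmpwalk invocation similar to the following; the community string public and the host name are placeholders:
# snmpwalk -v 2c -c public esxi-host.example.com 1.3.6.1.2.1.31.1.1.1.10
Before the fix the command returned the No Such Instance error shown above; after the fix it should return the ifHCOutOctets counters for each interface.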
-
vSocket driver running on an ESXi host with vShield Endpoint might get into a deadlock
The vSocket driver might get into a deadlock in a Windows virtual machine running on an ESXi host with vShield Endpoint. A blue diagnostic screen with the following log is displayed:
esx-<host_name>-2013-07-01--09.54/vmfs/volumes/50477c5b-d5ad9a53-8e9c-047d7ba55d90/B15E030S/vmware.log:
2013-06-27T20:44:05.812Z| vcpu-0| TOOLS installed legacy version 8384, available legacy version 8389
2013-06-27T20:44:05.813Z| vcpu-0| Guest: toolbox: Version: build-515842
...
2013-06-28T20:38:16.923Z| vcpu-0| Ethernet0 MAC Address: 00:50:56:bf:73:51
2013-06-28T20:38:16.927Z| vcpu-1| Ethernet2 MAC Address: 00:50:56:bf:73:53
2013-06-28T20:39:16.930Z| vcpu-0| Guest: vfile: vf-AUDIT: VFileAuditSvmConnectivity : Lost connectivity to SVM, irpStatus: 0xc0000001 ## disconnect SecureVM
2013-06-28T20:41:06.185Z| vmx| GuestRpcSendTimedOut: message to toolbox timed out.
2013-06-28T20:41:06.186Z| vmx| Vix: [4304191 guestCommands.c:2194]: Error VIX_E_TOOLS_NOT_RUNNING in VMAutomationTranslateGuestRpcError(): VMware Tools are not running in the guest
2013-06-28T20:48:28.341Z| mks| MKS disabling SVGA
2013-06-28T20:48:34.346Z| mks| WinBSOD: (30) `Dumping physical memory to disk: 30 '
2013-06-28T20:48:43.349Z| mks| WinBSOD: (30) `Dumping physical memory to disk: 35 '
This issue is resolved in this release.
-
Virtual machine becomes unresponsive due to a signal 11 error while executing SVGA code
A virtual machine becomes unresponsive due to a signal 11 error while executing SVGA code in svga2_map_surface.
This issue is resolved in this release.
-
When an ESXi host reports a disk full message, the image utility function fails to clear the temp files
After the ESXi host encounters a storage issue, such as a full disk, the image utility function might fail to clear the tempFileName files from vCloud Director.
This issue is resolved in this release.
-
Hostd might stop responding and generate a hostd-worker dump
The ESXi 5.1 host might generate a hostd-worker dump after you detach a software iSCSI disk that is connected through a vmknic on a vDS. This issue occurs when you attempt to retrieve the latest information on the host.
This issue is resolved in this release.
-
vSphere replication might fail after an upgrade from ESXi 5.1.x to 5.5 when the vSphere Replication traffic checkbox is selected under the Networking option in the vSphere Web Client on ESXi 5.1.x
You might be unable to replicate a virtual machine after you upgrade to ESXi 5.5 from ESXi 5.1.x. This issue occurs when the vSphere Replication traffic checkbox is selected under the Networking option in the vSphere Web Client on ESXi 5.1.x.
Error messages similar to the following are written to vmkernel.log file:
2013-10-16T15:42:48.169Z cpu5:69024)WARNING: Hbr: 549: Connection failed to 10.1.253.51 (groupID=GID-46415ddc-2aef-4e9f-a173-49cc80854682): Timeout
2013-10-16T15:42:48.169Z cpu5:69024)WARNING: Hbr: 4521: Failed to establish connection to [10.1.253.51]:31031(groupID=GID-46415ddc-2aef-4e9f-a173-49cc80854682): Timeout
When you select virtual machine Summary tab, error messages similar to the following might be displayed:
No Connection to VR server: Not responding
Attempts to start synchronization might also fail with error message similar to the following:
An ongoing synchronization task already exists
The Tasks & Events tab might display the state of the virtual machine as Invalid.
This issue is resolved in this release.
-
ESXi host might lose connection from vCenter Server when Storage vMotion is performed on a virtual machine undergoing replication
When a virtual machine is replicated and you perform storage vMotion on the virtual machine, the hostd service might stop responding. As a result, ESXi host might lose connection from vCenter Server.
Error messages similar to the following might be written to the hostd log files:
2013-08-28T14:23:10.985Z [FFDF8D20 info 'TagExtractor'] 9: Rule type=[N5Hostd6Common31MethodNameBasedTagExtractorRuleE:0x4adddd0], id=rule[VMThrottledOpRule], tag=IsVMThrottledOp, regex=vim\.VirtualMachine\.(reconfigure|removeAllShapshots)|vim\.ManagedEntity\.destroy|vim\.Folder\.createVm|vim\.host\.LowLevelProvisioningManager\.(createVM|reconfigVM|consolidateDisks)|vim\.vm\.Snapshot\.remove - Identifies Virtual Machine operations that need additional throttling2013-08-28T14:23:10.988Z [FFDF8D20 info 'Default'] hostd-9055-1021289.txt time=2013-08-28 14:12:05.000--> Crash Report build=1021289--> Non-signal terminationBacktrace:--> Backtrace[0] 0x6897fb78 eip 0x1a231b70--> Backtrace[1] 0x6897fbb8 eip 0x1a2320f9
This issue occurs if there is a configuration issue during virtual machine replication.
This issue is resolved in this release.
-
Unable to create virtual machines on remotely mounted vSphere Storage Appliance datastore
An ESXi host creating virtual machines on a remotely mounted vSphere Storage Appliance (VSA) datastore might stop responding. This is due to the VSA Manager plugin not handling network errors correctly.
This issue occurs due to communication errors and the underlying function returning NULL pointer to the VSA Manager plugin.
This issue is resolved in this release.
-
During target resets, lsilogic of a virtual machine waits for commands from all targets
When the lsilogic virtual adapter performs a target reset, it waits for commands in flight for all targets on the adapter. This causes the target reset to block the virtual machine for more time than required.
This issue is resolved in this release.
Networking
-
Multiple ESXi 5.1 hosts might stop responding intermittently when NetFlow monitoring is enabled on a distributed port group
If NetFlow monitoring is enabled on a distributed port group, the Internet Protocol Flow Information Export (IPFIX) capability is used to monitor switch traffic. A race condition might occur in the IPFIX filter function when records with the same key co-exist in the hashtable, causing ESXi hosts to stop responding with a backtrace similar to the following:
cpu8:42450)Panic: 766: Panic from another CPU (cpu 8, world 42450): ip=0x41801d47b266:
#GP Exception 13 in world 42450:vmm0:Datacen @ 0x41801d4b7aaf
cpu8:42450)Backtrace for current CPU #8, worldID=42450, ebp=0x41221749b390
cpu8:42450)0x41221749b390:[0x41801d4b7aaf]vmk_SPLock@vmkernel#nover+0x12 stack: 0x41221749b3f0, 0xe00000000000
cpu8:42450)0x41221749b4a0:[0x41801db01409]IpfixFilterPacket@ # +0x7e8 stack: 0x4100102205c0, 0x1, 0x
cpu8:42450)0x41221749b4e0:[0x41801db01f36]IpfixFilter@ # +0x41 stack: 0x41801db01458, 0x4100101a2e58
This issue is resolved in this release.
-
The tcpdump-uw utility can only capture packets smaller than 8138 bytes
The tcpdump-uw frame capture utility bundled with ESXi can only capture packets smaller than 8138 bytes. This issue occurs due to a socket buffer size being set to 8KB in VMware's implementation of tcpdump-uw. The socket buffer is set to 8192 bytes and approximately 54 bytes is needed for control data, making 8138 bytes the maximum that can be captured.
The default buffer size is increased to 64KB to resolve the issue.
-
Management traffic configured on a non-vmk0 network interface defaults to vmk0 after upgrading from ESXi 5.0 to ESXi 5.1
If a non-vmk0 vmknic is used for management traffic on ESXi 5.0 and the host is upgraded to ESXi 5.1, the management traffic incorrectly defaults to vmk0.
This issue is resolved in this release.
-
Network packet drops are incorrectly reported in esxtop between two virtual machines on the same ESXi host and vSwitch
When two virtual machines on the same vSwitch on a host are configured with the e1000 driver, esxtop might report significant packet drops for the network traffic between the two virtual machines. This happens because split packets are not accounted for during reporting when TSO is enabled in the guest.
This issue is resolved in this release.
-
Linux commands, ip link or ip addr, might display the link state for VMXNET3 adapters as Unknown instead of Up
When you create VMXNET3 adapters on the guest operating system, the Linux command, ip link or ip addr might display the link state as Unknown instead of Up.
This issue is resolved in this release.
-
Virtual machines with Solaris 10 guest operating system and VMXNET3 drivers display overfragmented log messages on the console
When you install VMware Tools on Solaris 10 virtual machine and create a VMXNET3 device on the guest operating system, log messages similar to the following are displayed on the virtual machine console:
Apr 3 22:44:54 daxa020z last message repeated 274 times
Apr 3 22:45:00 daxa020z vmxnet3s: [ID 450982 kern.notice] vmxnet3s:0: overfragmented mp (16)
Apr 3 22:51:35 daxa020z last message repeated 399 times
Apr 3 22:51:40 daxa020z vmxnet3s: [ID 450982 kern.notice] vmxnet3s:0: overfragmented mp (16)
This issue is resolved in this release.
-
Obtaining the permanent MAC address for a VMXNET3 NIC might fail
When you use the ETHTOOL_GPERMADDR ioctl to obtain the permanent MAC address for a VMXNET3 NIC, if the Linux kernel version is between 2.6.13 and 2.6.23, no results are obtained. If the Linux kernel version is above 2.6.23, the MAC address returned contains all zeros.
This issue is resolved in this release.
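For reference, the permanent MAC address can also be queried from within the guest with the ethtool utility, which issues this ioctl internally; the interface name eth0 is a placeholder:
# ethtool -P eth0
Before the fix, the command returned no address or an all-zero address, depending on the guest kernel version.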
-
ESXi 5.x host with virtual machines using an E1000 or E1000e virtual adapter fails with a purple diagnostic screen
ESXi host experiences a purple diagnostic screen with errors for E1000PollRxRing and E1000DevRx when the rxRing buffer fills up and the max Rx ring is set to more than 2. The next Rx packet received that is handled by the second ring is NULL, causing a processing error. The purple diagnostic screen or backtrace contains entries similar to:
@BlueScreen: #PF Exception 14 in world 63406:vmast.63405 IP 0x41801cd9c266 addr 0x0
PTEs:0x8442d5027;0x383f35027;0x0;
Code start: 0x41801cc00000 VMK uptime: 1:08:27:56.829
0x41229eb9b590:[0x41801cd9c266]E1000PollRxRing@vmkernel#nover+0xdb9 stack: 0x410015264580
0x41229eb9b600:[0x41801cd9fc73]E1000DevRx@vmkernel#nover+0x18a stack: 0x41229eb9b630
0x41229eb9b6a0:[0x41801cd3ced0]IOChain_Resume@vmkernel#nover+0x247 stack: 0x41229eb9b6e0
0x41229eb9b6f0:[0x41801cd2c0e4]PortOutput@vmkernel#nover+0xe3 stack: 0x410012375940
0x41229eb9b750:[0x41801d1e476f]EtherswitchForwardLeafPortsQuick@ # +0xd6 stack: 0x31200f9
0x41229eb9b950:[0x41801d1e5fd8]EtherswitchPortDispatch@ # +0x13bb stack: 0x412200000015
0x41229eb9b9c0:[0x41801cd2b2c7]Port_InputResume@vmkernel#nover+0x146 stack: 0x412445c34cc0
0x41229eb9ba10:[0x41801cd2ca42]Port_Input_Committed@vmkernel#nover+0x29 stack: 0x41001203aa01
0x41229eb9ba70:[0x41801cd99a05]E1000DevAsyncTx@vmkernel#nover+0x190 stack: 0x41229eb9bab0
0x41229eb9bae0:[0x41801cd51813]NetWorldletPerVMCB@vmkernel#nover+0xae stack: 0x2
0x41229eb9bc60:[0x41801cd0b21b]WorldletProcessQueue@vmkernel#nover+0x486 stack: 0x41229eb9bd10
0x41229eb9bca0:[0x41801cd0b895]WorldletBHHandler@vmkernel#nover+0x60 stack: 0x10041229eb9bd20
0x41229eb9bd20:[0x41801cc2083a]BH_Check@vmkernel#nover+0x185 stack: 0x41229eb9be20
0x41229eb9be20:[0x41801cdbc9bc]CpuSchedIdleLoopInt@vmkernel#nover+0x13b stack: 0x29eb9bfa0
0x41229eb9bf10:[0x41801cdc4c1f]CpuSchedDispatch@vmkernel#nover+0xabe stack: 0x0
0x41229eb9bf80:[0x41801cdc5f4f]CpuSchedWait@vmkernel#nover+0x242 stack: 0x412200000000
0x41229eb9bfa0:[0x41801cdc659e]CpuSched_Wait@vmkernel#nover+0x1d stack: 0x41229eb9bff0
0x41229eb9bff0:[0x41801ccb1a3a]VmAssistantProcessTask@vmkernel#nover+0x445 stack: 0x0
0x41229eb9bff8:[0x0] stack: 0x0
This issue is resolved in this release.
-
Virtual machines with e1000 NIC might fail when placed in suspended mode
A virtual machine might fail and display error messages similar to the following in the vmware.log file when
the guest operating system with the e1000 NIC driver is placed in suspended mode:
2013-08-02T05:28:48Z[+11.453]| vcpu-1| I120: Msg_Post: Error
2013-08-02T05:28:48Z[+11.453]| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
2013-08-02T05:28:48Z[+11.453]| vcpu-1| I120+ Unexpected signal: 11.
This issue occurs for virtual machines that use IP aliasing when the number of IP addresses exceeds 10.
This issue is resolved in this release.
-
IP address displayed on DCUI changes on reset when the management traffic is enabled on multiple VMkernel ports
Whenever you reset a management network where the management traffic is enabled on multiple VMkernel ports, the IP address displayed on the Direct Console User Interface (DCUI) changes.
This issue is resolved in this release.
-
ESXi host displays a purple diagnostic screen with a PF Exception 14 error
ESXi hosts with DvFilter module might display a purple diagnostic screen. A backtrace similar to the following is displayed:
2013-07-18T06:41:39.699Z cpu12:10669)0x412266b5bbe8:[0x41800d50b532]DVFilterDispatchMessage@com.vmware.vmkapi#v2_1_0_0+0x92d stack: 0x10
2013-07-18T06:41:39.700Z cpu12:10669)0x412266b5bc68:[0x41800d505521]DVFilterCommBHDispatch@com.vmware.vmkapi#v2_1_0_0+0x394 stack: 0x100
2013-07-18T06:41:39.700Z cpu12:10669)0x412266b5bce8:[0x41800cc2083a]BH_Check@vmkernel#nover+0x185 stack: 0x412266b5bde8, 0x412266b5bd88,
This issue is resolved in this release.
-
Unused vSphere Distributed Switch (VDS) ports are not cleared from the .dvsData directory on the datastores
During vMotion of a virtual machine that has a vNIC connected to a VDS, port files from the vMotion source host are not cleared from the .dvsData directory, even after some time.
This issue is resolved in this release.
-
ESXi host might fail with a purple diagnostic screen due to a conflict between two DVFilter processes
If two DVFilter processes attempt to manage a single configuration variable at the same time, one freeing the filter configuration while the other attempts to lock it, the ESXi host might fail.
This issue is resolved in this release.
-
The return value of net.throughput.usage in vCenter performance chart and vmkernel are contradictory
In the vCenter performance chart, the net.throughput.usage value is reported in kilobytes, but the VMkernel returns the same value in bytes. This leads to an incorrect representation of values in the vCenter performance chart.
This issue is resolved in this release.
-
ESXi hosts might stop responding when the L2Echo function is unable to handle network traffic
When the Network Healthcheck feature is enabled, the L2Echo function might not be able to handle high network traffic and the ESXi hosts might stop responding with a purple diagnostic screen and a backtrace similar to the following:
cpu4:8196)@BlueScreen: PCPU 1: no heartbeat (2/2 IPIs received)
cpu4:8196)Code start: 0x418024600000 VMK uptime: 44:20:54:02.516
cpu4:8196)Saved backtrace from: pcpu 1 Heartbeat NMI
cpu4:8196)0x41220781b480:[0x41802468ded2]SP_WaitLockIRQ@vmkernel#nover+0x199 stack: 0x3b
cpu4:8196)0x41220781b4a0:[0x4180247f0253]Sched_TreeLockMemAdmit@vmkernel#nover+0x5e stack: 0x20
cpu4:8196)0x41220781b4c0:[0x4180247d0100]MemSched_ConsumeManagedKernelMemory@vmkernel#nover+0x1b stack: 0x0
cpu4:8196)0x41220781b500:[0x418024806ac5]SchedKmem_Alloc@vmkernel#nover+0x40 stack: 0x41220781b690
...
cpu4:8196)0x41220781bbb0:[0x4180247a0b13]vmk_PortOutput@vmkernel#nover+0x4a stack: 0x100
cpu4:8196)0x41220781bc20:[0x418024c65fb2]L2EchoSendPkt@com.vmware.net.healthchk#1.0.0.0+0x85 stack: 0x4100000
cpu4:8196)0x41220781bcf0:[0x418024c6648e]L2EchoSendPort@com.vmware.net.healthchk#1.0.0.0+0x4b1 stack: 0x0
cpu4:8196)0x41220781bfa0:[0x418024c685d9]L2EchoRxWorldFn@com.vmware.net.healthchk#1.0.0.0+0x7f8 stack: 0x4122
cpu4:8196)0x41220781bff0:[0x4180246b6c8f]vmkWorldFunc@vmkernel#nover+0x52 stack: 0x0
This issue is resolved in this release.
-
Provisioning and customizing a virtual machine might result in loss of network connection
When you provision and customize a virtual machine from a template on a vDS with Ephemeral Ports, the virtual machine might lose connection from the network.
Error messages similar to the following might be written to the log files:
2013-08-05T06:33:33.990Z| vcpu-1| VMXNET3 user: Ethernet1 Driver Info: version = 16847360 gosBits = 2 gosType = 1, gosVer = 0, gosMisc = 02013-08-05T06:33:35.679Z| vmx| Msg_Post: Error
2013-08-05T06:33:35.679Z| vmx| [msg.mac.cantGetPortID] Unable to get dvs.portId for ethernet0
2013-08-05T06:33:35.679Z| vmx| [msg.mac.cantGetNetworkName] Unable to get networkName or devName for ethernet0
2013-08-05T06:33:35.679Z| vmx| [msg.device.badconnect] Failed to connect virtual device Ethernet0.
This issue is resolved in this release.
-
VMXNET3 resets frequently when Receive Side Scaling (RSS) is enabled in a virtual machine
When Receive Side Scaling (RSS) is enabled on a virtual machine, the VMXNET3 network driver resets frequently, causing the virtual machine to lose network connectivity for a brief period of time.
This issue is resolved in this release. The VMXNET3 network driver is updated in this release.
-
Virtual machines lose network connectivity while performing snapshot commit operation
Virtual machines lose network connectivity and do not respond during snapshot consolidations after a virtual machine backup. The virtual machines are reported as not busy during this period.
This issue is resolved in this release.
-
USB controllers cannot be configured to be DirectPath I/O passthrough
If you configure a USB host controller device to be DirectPath I/O passthrough and reboot the host, the settings do not persist.
This issue is resolved in this release. However, do not configure USB controllers as passthrough if you boot ESXi hosts from USB devices such as an SD card.
Security
-
Update to OpenSSL library addresses multiple security issues
The ESXi userworld OpenSSL library is updated to version openssl-0.9.8y to resolve multiple security issues.
The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2013-0169 and CVE-2013-0166.
-
Update to libxml2 library addresses multiple security issues
The ESXi userworld libxml2 library has been updated to resolve a security issue.
The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2013-0338 to this issue.
-
NULL pointer dereference while handling the Network File Copy (NFC) traffic
VMware ESXi and ESX contain a NULL pointer dereference in
the handling of the Network File Copy (NFC) traffic. To
exploit this vulnerability, an attacker must intercept and
modify the NFC traffic between ESXi/ESX and the client.
Exploitation of the issue might lead to a Denial of Service.
The Common Vulnerabilities and Exposures project (cve.mitre.org) has
assigned the name CVE-2014-1207 to this issue.
-
EHCI does not validate port fields correctly
Due to a flaw in the handling of invalid ports, it is possible
to cause the VMX process to fail. This vulnerability may allow a
guest user to crash the VMX process resulting in a partial denial of
service on the host.
The Common Vulnerabilities and Exposures project (cve.mitre.org) has
assigned the name CVE-2014-1208 to this issue.
Server Configuration
-
NetApp has requested an update to the SATP claim rule, which prevents iSCSI from entering an unresponsive state
NetApp has requested an update to the SATP claim rule which prevents the reservation conflict for a Logical Unit Number (LUN). The updated SATP claim rule uses the reset option to clear the reservation from the LUN and allows other users to set the reservation option.
This issue is resolved in this release.
-
When you boot from a Storage Area Network it might take longer to discover the boot device depending on the network bandwidth
When you boot from a SAN and the boot device discovery process takes a long time to complete, the ESXi host /bootbank points to /tmp. This release adds the bootDeviceRescanTimeout parameter, which can be passed on the ESXi boot command line before the boot process to configure the timeout value and resolve the issue.
This issue is resolved in this release.
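As a sketch of how the new parameter can be used (the 120-second value is only an example), append the option to the boot command line, for example by editing the boot options (Shift+O) at the ESXi boot prompt:
bootDeviceRescanTimeout=120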
-
Attempts to apply a host profile might fail while removing multiple NFS datastores
When you attempt to remove multiple NFS datastores using the Host Profile option, an error occurs because there might be a datastore that has already been deleted.
This issue is resolved in this release.
-
vSwitch Failback and Failover Order settings are not copied to the host when you apply host profile
When you attach a host profile to a host, vSwitch properties such as Failback and Failover Order extracted from a reference host are not applied to the host.
This issue is resolved in this release.
-
ESXi host reports incorrect login time for ha-eventmgr in hostd.log file
The ESXi host might incorrectly display the last login time for the root user as 1970; this information is displayed for ha-eventmgr in the ESXi host's Web client. In this release, the login time is calculated using the system time, which resolves the issue.
This issue is resolved in this release.
-
ESXi host fails with a host profile error when changing the network failover setting to beacon probing
While applying the host profile to an ESXi host, if you attempt to change the network failover detection to beacon probing, the ESXi host fails with an error message similar to the following:
Associated host profile contains NIC failure criteria settings that cannot be applied to the host
This issue is resolved in this release.
-
The bandwidthCap option of an ESXi host might not work on guest operating systems
On an ESXi host, when bandwidthCap and throughputCap are set at the same time, the I/O throttling option might not work on virtual machines. This happens because of an incorrect logical comparison when setting the throttle option in the SCSI scheduler.
This issue is resolved in this release.
-
When you query the ESXi host through SNMP, the host reports an incorrect CPU load average value
When you perform an SNMP query against the ESXi host for the CPU load average, hrProcessorLoad is calculated for the entire lifetime instead of for the past minute. This results in the host reporting an incorrect CPU load average value.
This issue is resolved in this release.
-
ESXi host displays incorrect values for the resourceCpuAllocMax and resourceMemAllocMax system counter
When you attempt to retrieve the value for the resourceCpuAllocMax and resourceMemAllocMax system counters against the host system, the ESXi host returns incorrect values. This issue is observed on a vSphere Client connected to vCenter Server.
This issue is resolved in this release.
-
The ESXi host profile reports a compliance failure with an Option Annotations.WelcomeMessage does not match the specified Criteria error message
Whenever you add text to Annotations.WelcomeMessage, create an ESXi host profile, and apply that host profile to other hosts, the other hosts report an error message similar to the following:
Option Annotations.WelcomeMessage does not match the specified Criteria
This issue is resolved in this release.
-
Multiple ESXi servers might stop responding in the vCenter Server
When a high volume of parallel HTTP GET /folder URL requests is sent to hostd, the hostd service fails. This prevents the host from being added back to vCenter Server. An error message similar to the following might be displayed:
Unable to access the specified host, either it doesn't exist, the server software is not responding, or there is a network problem.
This issue is resolved in this release.
-
Unable to add permissions from vSphere Client to Active Directory users or groups on an ESXi host joined to an Active Directory (AD) domain
You might not be able to add permissions to AD users or groups because the domain name is not available for selection in the Domain drop-down menu.
This issue is resolved in this release.
-
SNMP traps not received on hosts where SNMP is enabled and third-party CIM providers are installed on the server
When the monitored hardware status changes on an ESXi host that has SNMP enabled and third-party CIM providers installed on the server, you might not receive SNMP traps. Messages similar to the following are logged in the syslog file:
2013-07-11T05:24:39Z snmpd: to_sr_type: unable to convert varbind type '71'
2013-07-11T05:24:39Z snmpd: convert_value: unknown SR type value 0
2013-07-11T05:24:39Z snmpd: parse_varbind: invalid varbind with type 0 and value: '2'
2013-07-11T05:24:39Z snmpd: forward_notifications: parse file '/var/spool/snmp/1373520279_6_1_3582.trp' failed, ignored
This issue is resolved in this release.
-
Cluster wide storage rescan from vCenter Server causes ESXi hosts and virtual machines to become unresponsive
When you perform certain operations that directly or indirectly involve Virtual Machine File System (VMFS), usually in an ESXi cluster environment sharing a large number of VMFS datastores, you might encounter the following problems:
- ESXi host stops responding intermittently
- ESXi host gets disconnected from the vCenter Server
- Virtual machine stops responding intermittently
- The virtual machines that are part of Microsoft Cluster Service (MSCS) stop responding, resulting in failover
- Host communication errors occur during Site Recovery Manager (SRM) data recovery and failover tests
This issue is resolved in this release.
-
TCP, SSL remote logging does not restart automatically after a network interruption
VMware ESXi 5.x host configured with TCP/SSL remote syslog stops sending syslogs to remote log server when the network connection to the remote log server is interrupted and restored.
This issue is resolved by adding a Default Network Retry Timeout; the host retries sending syslog messages after the Default Network Retry Timeout elapses. The default value of the Default Network Retry Timeout is 180 seconds. The command esxcli system syslog config set --default-timeout= can be used to change the default value.
This issue is resolved in this release.
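For example, to increase the retry timeout from the 180-second default to 300 seconds (an arbitrary example value) and then reload the syslog configuration:
# esxcli system syslog config set --default-timeout=300
# esxcli system syslog reload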
-
ESXi host stops responding and displays a purple diagnostic screen with a PCPU XX didn't have a heartbeat error
The ESXi host might become unresponsive during vMotion, with a backtrace similar to the following:
2013-07-18T17:55:06.693Z cpu24:694725)0x4122e7147330:[0x418039b5d12e]Migrate_NiceAllocPage@esx#nover+0x75 stack: 0x4122e7147350
2013-07-18T17:55:06.694Z cpu24:694725)0x4122e71473d0:[0x418039b5f673]Migrate_HeapCreate@esx#nover+0x1ba stack: 0x4122e714742c
2013-07-18T17:55:06.694Z cpu24:694725)0x4122e7147460:[0x418039b5a7ef]MigrateInfo_Alloc@esx#nover+0x156 stack: 0x4122e71474f0
2013-07-18T17:55:06.695Z cpu24:694725)0x4122e7147560:[0x418039b5df17]Migrate_InitMigration@esx#nover+0x1c2 stack: 0xe845962100000001
...
2013-07-18T17:55:07.714Z cpu25:694288)WARNING: Heartbeat: 646: PCPU 27 didn't have a heartbeat for 7 seconds; *may* be locked up.
2013-07-18T17:55:07.714Z cpu27:694729)ALERT: NMI: 1944: NMI IPI received. Was eip(base):ebp:cs
This occurs while running vMotion on a host under memory overload.
This issue is resolved in this release.
-
ESXi host fails to get information related to maximum LBA count and maximum unmap descriptor count when you execute the unmap command
When you execute the unmap command, the ESXi host retrieves the maximum LBA count and maximum unmap descriptor count when the disk is opened, and caches them. The host uses this information to validate requests from the virtual SCSI layer. Previously, the host failed to retrieve the required information.
This issue is resolved in this release.
-
ESXi server fails to boot when you enable the TXT function in the server UEFI BIOS
If you enable TPM/TXT in the system UEFI BIOS of an ESXi host, both Tboot and the ESXi host might fail to boot. This issue is observed on the IBM 3650 M4 server.
This issue is resolved in this release.
-
ESXi host fails with a purple diagnostic screen when handling SCSI mid-layer frame in SCSIPathTimeoutHandlerFn
If the timer fires before the SCSI mid-layer frame is initialized with the vmkCmd option in the SCSI function, the ESXi host might fail with a purple diagnostic screen and an error message similar to the following might be displayed:
@BlueScreen: #PF Exception 14 in world 619851:PathTaskmgmt IP 0x418022c898f6 addr 0x48
Code start: 0x418022a00000 VMK uptime: 9:08:16:00.066
0x4122552dbfb0:[0x418022c898f6]SCSIPathTimeoutHandlerFn@vmkernel#nover+0x195 stack:
0x4122552e7000
0x4122552dbff0:[0x418022c944dd]SCSITaskMgmtWorldFunc@vmkernel#nover+0xf0 stack: 0x0
This issue is resolved in this release.
-
ESXi host does not report configuration issues if core dump partition and core dump collector services are not configured
If an ESXi host is neither configured with a core dump partition nor configured to direct core dumps to a dump collector server, important troubleshooting information might be lost in case of a host panic. This release adds a check for this configuration in the host agent, so the host Summary tab and Events tab show the issue if the core dump partition or dump collector service is not configured.
This issue is resolved in this release.
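As an example of the configuration that is now checked, the current core dump targets can be inspected with ESXCLI, and a network dump collector can be configured; the vmk0 interface, the 10.0.0.1 collector address, and port 6500 below are placeholders:
# esxcli system coredump partition get
# esxcli system coredump network get
# esxcli system coredump network set --interface-name vmk0 --server-ipv4 10.0.0.1 --server-port 6500
# esxcli system coredump network set --enable true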
-
Applying Host Profile with preferred path settings fails on the destination host with Invalid Path Value error
When Host1 is configured with the Path Selection Policy (PSP) set to Fixed along with a preferred path, and you then extract a profile from Host1 and apply it to Host2, the Host Profile plugin module might encounter an Invalid Path Value error during the initial check of the Host2 profile and report the invalid path value.
This issue is resolved in this release.
-
Host compliance check fails with an error related to extracting indication configuration
When there is an invalid CIM subscription in the system and you perform a host profile compliance check against a host, an error message similar to the following might be displayed:
Error extracting indication configuration: (6, u'The requested object could not be found')
You cannot apply the host profile on the host.
This issue is resolved in this release. You can apply host profiles even when there is an invalid indication in the host profile.
-
VMkernel fails when virtual machine monitor returns an invalid machine page number
When virtual machine monitor (VMM) passes a VPN value to read a page, VMkernel might fail to find a valid machine page number for that VPN value. This results in the host failing with a purple diagnostic screen. This issue occurs when VMM sends a bad VPN while performing a monitor core dump during a VMM failure.
This issue is resolved in this release.
-
When you upgrade ESXi 5.0.x to ESXi 5.1.x the host loses the NAS datastores and other configurations
When you upgrade ESXi 5.0.x to ESXi 5.1.x using vCenter Update Manager, the ESXi host loses the NAS datastore entries containing the string Symantec. In this release, the script is modified to remove unnecessary entries from the configuration files during upgrade, which resolves this issue.
This issue is resolved in this release.
Storage
-
Request Sense command sent from a guest operating system does not return any data
When a SCSI Request Sense command is sent to Raw Device Mapping (RDM) in Physical Mode from a guest operating system, sometimes the returned sense data is NULL (zeroed). The issue only occurs when the command is sent from the guest operating system.
This issue is resolved in this release.
-
Cloning and cold migration of virtual machines with large VMDK and snapshot files might fail
You might be unable to clone and perform cold migration of virtual machines with large virtual machine disk (VMDK) and snapshot files to other datastores. This issue occurs when the vpxa process exceeds the limit of memory allocation during cold migration. As a result, the ESXi host loses the connection from the vCenter Server and the migration fails.
This issue is resolved in this release.
-
False Device Busy (D:0x8) status messages might be displayed in the VMkernel log files when vmklinux incorrectly sets the device status
When vmklinux incorrectly sets the device status, false Device Busy (D:0x8) status messages similar to the following are displayed in VMkernel log files:
2013-04-04T17:56:47.668Z cpu0:4012)ScsiDeviceIO: SCSICompleteDeviceCommand:2311: Cmd(0x412441541f80) 0x16, CmdSN 0x1c9f from world 0 to dev "naa.600601601fb12d00565065c6b381e211"
failed H:0x0 D:0x8 P:0x0 Possible sense data: 0x0 0x0 0x0
This generates false alarms, as the storage array does not send any Device Busy status message for SCSI commands.
This issue is resolved in this release by correctly pointing to Host Bus Busy (H:0x2) status messages for issues in the device drivers similar to the following:
2013-04-04T13:16:27.300Z cpu12:4008)ScsiDeviceIO: SCSICompleteDeviceCommand:2311: Cmd(0x4124819c2f00) 0x2a, CmdSN 0xfffffa80043a2350 from world 4697 to dev "naa.600601601fb12d00565065c6b381e211"
failed H:0x2 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0
This issue is resolved in this release.
-
vCenter Server or vSphere Client might get disconnected from the ESXi host during VMFS datastore creation
vCenter Server or vSphere Client might get disconnected from the ESXi host during VMFS datastore creation.
This issue occurs when hostd fails with an error message similar to the following in the hostd log file:
Panic: Assert Failed: "matchingPart != __null".
The hostd service fails during VMFS datastore creation on disks with certain partition configurations that require partition alignment.
This issue is resolved in this release.
-
Provisioned space value for NFS datastore is incorrectly calculated, resulting in false alarms
Under certain conditions, the provisioned space value for an NFS datastore might be incorrectly calculated and false alarms might be generated.
This issue is resolved in this release.
-
Unable to bring datastores back online after a Permanent Device Loss
After a Permanent Device Loss (PDL), you are unable to bring datastores back online due to some open file handles on the volume.
This issue is resolved in this release.
-
Loading nfsclient module during ESXi boot process causes load failure of other modules
During the ESXi boot process, loading the nfsclient module might hold lock on the esx.conf file for a long time if there is a delay in the host name resolution for Network File System (NFS) mount
points. This might cause failure of other module loads such as migrate, ipmi, or others.
This issue is resolved in this release.
-
Unable to access the VMFS datastore or some files
You might find the Virtual Machine File System (VMFS) datastore missing from the vCenter Server's Datastore tab or an event similar to the following displayed in the Events tab:
XXX esx.problem.vmfs.lock.corruptondisk.v2 XXX or At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too.
The following log message is displayed in the VMkernel log:
[lockAddr 36149248] Invalid state: Owner 00000000-00000000-0000-000000000000 mode 0 numHolders 0 gblNumHolders 4294967295ESC[7m2013-05-12T19:49:11.617Z cpu16:372715)WARNING: DLX: 908: Volume 4e15b3f1-d166c8f8-9bbd-14feb5c781cf ("XXXXXXXXX") might be damaged on the disk. Corrupt lock detected at offset 2279800: [type 10c00001 offset 36149248 v 6231, hb offset 372ESC[0$
You might also see the following message logged in the vmkernel.log file:
2013-07-24T05:00:43.171Z cpu13:11547)WARNING: Vol3: ValidateFS:2267: XXXXXX/51c27b20-4974c2d8-edad-b8ac6f8710c7: Non-zero generation of VMFS3 volume: 1
This issue is resolved in this release.
-
ESXi 5.1 hosts might fail with a purple diagnostic screen and error messages related to the DevFSFileClose function
When multiple threads try to close the same device at the same time, ESXi 5.1 hosts might fail with a purple diagnostic screen and a backtrace similar to the following:
cpu1:16423)@BlueScreen: #PF Exception 14 in world 16423:helper1-0 IP 0x41801ac50e3e addr 0x18PTEs:0x0;
cpu1:16423)Code start: 0x41801aa00000 VMK uptime: 0:09:28:51.434
cpu1:16423)0x4122009dbd70:[0x41801ac50e3e]FDS_CloseDevice@vmkernel#nover+0x9 stack: 0x4122009dbdd0
cpu1:16423)0x4122009dbdd0:[0x41801ac497b4]DevFSFileClose@vmkernel#nover+0xf7 stack: 0x41000ff3ca98
cpu1:16423)0x4122009dbe20:[0x41801ac2f701]FSS2CloseFile@vmkernel#nover+0x130 stack: 0x4122009dbe80
cpu1:16423)0x4122009dbe50:[0x41801ac2f829]FSS2_CloseFile@vmkernel#nover+0xe0 stack: 0x41000fe9a5f0
cpu1:16423)0x4122009dbe80:[0x41801ac2f89e]FSS_CloseFile@vmkernel#nover+0x31 stack: 0x1
cpu1:16423)0x4122009dbec0:[0x41801b22d148]CBT_RemoveDev@ # +0x83 stack: 0x41000ff3ca60
cpu1:16423)0x4122009dbef0:[0x41801ac51a24]FDS_RemoveDev@vmkernel#nover+0xdb stack: 0x4122009dbf60
cpu1:16423)0x4122009dbf40:[0x41801ac4a188]DevFSUnlinkObj@vmkernel#nover+0xdf stack: 0x0
cpu1:16423)0x4122009dbf60:[0x41801ac4a2ee]DevFSHelperUnlink@vmkernel#nover+0x51 stack: 0xfffffffffffffff1
cpu1:16423)0x4122009dbff0:[0x41801aa48418]helpFunc@vmkernel#nover+0x517 stack: 0x0
cpu1:16423)0x4122009dbff8:[0x0] stack: 0x0
cpu1:16423)base fs=0x0 gs=0x418040400000 Kgs=0x0
cpu1:16423)vmkernel 0x0 .data 0x0 .bss 0x0
cpu1:16423)chardevs 0x41801ae70000 .data 0x417fc0000000 .bss 0x417fc00008a0
This issue is resolved in this release.
-
Logical Unit Number reset fails in fc_fcp_resp function while handling FCP_RSP_INFO
In a LUN RESET task with NetApp targets, the LUN RESET fails. The fc_fcp_resp() function does not complete the LUN RESET task because fc_fcp_resp assumes that FCP_RSP_INFO is 8 bytes with a 4-byte reserved field; however, for NetApp targets, the FCP_RSP to a LUN RESET has only 4 bytes of FCP_RSP_INFO. This leads to an fc_fcp_resp error without completing the task, and the host must be reset to recover.
This issue is resolved in this release.
- ESXi users cannot disable Fibre Channel over Ethernet on ports that are not used for the boot-from-SAN device
When an ESXi host boots from a Fibre Channel over Ethernet Storage Area Network, you might not be able to disable FCoE on ports that are not used to present the FCoE boot LUN.
This issue is resolved in this release.
-
Software iSCSI parameter changes are not logged in the syslog.log file
Starting with this release, changes to the software iSCSI session and connection parameters are logged in the syslog.log file, along with the previous and new parameter values.
This issue is resolved in this release.
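To confirm on a host that these entries are being written, you can search the syslog from the ESXi Shell. This is only a quick check that assumes the default log location of /var/log/syslog.log; the exact message text can vary between builds:
grep -i iscsi /var/log/syslog.log | tail -n 20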
-
Virtual machines might stop responding during snapshot consolidation
Attempts to consolidate snapshots on vCenter Server might fail with error message similar to the following:
hierarchy is too deep
This issue occurs when the virtual machine has 255 or more snapshots created by vSphere Replication.
This issue is resolved in this release.
-
Attempts to take snapshots of virtual machines with shared virtual disks might fail when Oracle Clusterware is used
When the multi-writer option is used on shared virtual disks along with the Oracle Real Application Cluster (RAC) option of the Oracle Clusterware software, attempts to take snapshots of virtual machines without their memory might fail, and the virtual machines might stop running. Log files might contain entries similar to the following:
Resuming virtual disk scsi1:5 failed. The disk has been modified since a snapshot was taken or the virtual machine was suspended.
This issue might be observed with other cluster management software as well.
This issue is resolved in this release.
-
Host profile might not be correctly applied to ESXi hosts with Auto Deploy using stateless caching
When you use Auto Deploy with stateless caching, the host profile might not be correctly applied to the ESXi host. As a result, the host does not become compliant when it joins vCenter Server.
This issue occurs when there are approximately 30 or more VMFS datastores.
This issue does not occur when you manually apply the host profile.
This issue is resolved in this release.
Upgrade and Installation
-
Host remediation against bulletins that have only Reboot impact fails
During the remediation process of a standalone ESXi host against a patch baseline that consists of bulletins that have only Reboot impact, Update Manager fails to power off or suspend the virtual machines that are on the host. As a result, the host cannot enter maintenance mode, and the remediation cannot be completed.
This issue is resolved in bulletins created in this release and later.
vCenter Server, vSphere Client, and vSphere Web Client
-
Customized performance chart does not provide the option of displaying the aggregate virtual disk stats for virtual machine objects
When you use the virtual disk metric to view performance charts, you only have the option of viewing the virtual disk performance charts for the available virtual disk objects.
This release allows you to view virtual disk performance charts for virtual machine objects as well. This is useful when you need to trigger alarms based on virtual disk usage by virtual machines.
-
The Summary tab might display incorrect values for provisioned space for Hosts with VAAI NAS
When a virtual disk with Thick Provision Lazy Zeroed format is created on an ESXi host with VAAI NAS, in the Datastores and Datastore Cluster view, the provisioned space displayed on the Summary tab might be double the provisioned storage set for the virtual disk.
For example, if the provisioned storage is 75GB, the provisioned space displayed might be around 150GB.
This issue is resolved in this release.
Virtual Machine Management
-
In a virtual machine, a WPF application displays items incorrectly when the 3D software rendering option is enabled
When you install a Windows Presentation Foundation (WPF) application in a virtual machine and enable the 3D software rendering option, the WPF application might display some images inconsistently.
This issue is resolved in this release.
-
Guest operating system fails with bug check code PAGE_FAULT_IN_NONPAGED_AREA during automated installation of Windows 2000 Server with 8 or more virtual CPUs
When you install Windows 2000 Server on a virtual machine, the guest operating system fails with a blue diagnostic screen and a PAGE_FAULT_IN_NONPAGED_AREA error message. This issue is observed in Windows 2000 virtual machines with eight or more virtual CPUs.
This issue is resolved in this release.
-
Attempts to export a virtual machine as OVF fail with a timeout error
When you attempt to export a virtual machine in Open Virtualization Format (OVF), the operation times out if the virtual machine disk has a large portion of empty blocks, for example 210GB or more, and uses an Ext2 or Ext3 file system.
This issue is resolved in this release.
-
Hot removal of a shared non-persistent disk takes more time when the disk is attached to (or shared with) another powered-on virtual machine
When you add shared non-persistent read-only disks to a virtual machine, the virtual machine might stop responding because it opens the read-only disks in exclusive mode.
This issue is resolved in this release.
vMotion and Storage vMotion
- Incremental backup might fail with a FileFault error for QueryChangedDiskAreas after moving a VMDK to a different volume using Storage vMotion for a CBT-enabled virtual machine
On a Changed Block Tracking (CBT) enabled virtual machine, when you perform QueryChangedDiskAreas after moving a virtual machine disk (VMDK) to a different volume using Storage vMotion, CBT information is re-initialized without discarding all ChangeID references, which might result in a FileFault error similar to the following:
2012-09-04T11:56:17.846+02:00 [03616 info 'Default' opID=52dc4afb] [VpxLRO] -- ERROR task-internal-4118 -- vm-26512 --
vim.VirtualMachine.queryChangedDiskAreas: vim.fault.FileFault:
--> Result:
--> (vim.fault.FileFault) {
--> dynamicType = <unset>,
--> faultCause = (vmodl.MethodFault) null, file =
--> "/vmfs/volumes/4ff2b68a-8b657a8e-a151-3cd92b04ecdc/VM/VM.vmdk",
--> msg = "Error caused by file
/vmfs/volumes/4ff2b68a-8b657a8e-a151-3cd92b04ecdc/VM/VM.vmdk",
--> }
To resolve the issue, CBT is deactivated and reactivated after a virtual machine disk (VMDK) is moved to a different datastore using Storage vMotion; this discards all changeId references, and a full backup must be taken before CBT is used for further incremental backups.
This issue occurs when a library function incorrectly re-initializes the disk change tracking facility.
This issue is resolved in this release.
VMware HA and Fault Tolerance
-
Attempts to enable a High Availability cluster might fail after a single host is placed in maintenance mode
Attempts to enable a High Availability (HA) cluster might fail after a single host of the same HA cluster is placed in maintenance mode. This issue occurs when the value of the
inode descriptor number is not set correctly in the ESXi root file system (Visorfs), and as a result, the stat calls on those inodes fail.
This issue is resolved in this release.
VMware Tools
-
VMware Tools installer does not install the vmhgfs module when Virtual Machine Communication Interface (VMCI) and VMCI Sockets (VSOCK) are upstreamed, even after vmhgfs installation is enabled
If you attempt to install VMware Tools when VMCI and VSOCK are upstreamed, the installer does not install the vmhgfs module even when you enable installation of this module. This issue is observed on Linux operating systems with kernel version 3.9 or later.
This issue is resolved in this release.
-
When VMCI and VSOCK are upstreamed, the VMware Tools installer no longer replaces them
If you run the /etc/vmware-tools/service.sh restart command to check services when VMCI and VSOCK are upstreamed and VMware Tools is installed, the status of VMCI might be displayed as failed. This issue is observed on Linux operating systems with kernel version 3.9 or later.
This issue is resolved in this release.
-
Virtual machines with Windows operating systems might display warning messages when you upgrade VMware Tools
When you upgrade VMware Tools to version 9.0 on an ESXi 5.1 host, virtual machines with Windows operating systems might display a warning message similar to the following:
Failed registration of app type 2 (Signals) from plugin unity
This issue is resolved in this release.
-
Solaris 10 virtual machines become unresponsive when you install VMware Tools from ESXi 5.1
If you install VMware Tools from ESXi 5.1 on a Solaris 10 virtual machine, the virtual machine might become unresponsive and might display a message similar to the following:
Wait for the CDE desktop to start
This issue is resolved in this release.
-
Attempts to set up a filter rule that contains the drive letter for a volume with an unsupported file system might cause a Windows Server 2003 or Windows XP virtual machine to fail with a blue screen
When you attempt to set up a filter rule that contains the drive letter for a volume with an unsupported file system, a Windows Server 2003 or Windows XP virtual machine might fail with a blue screen and display error messages similar to the following:
Error code 1000007e, parameter1 c0000005, parameter2 baee5d3b, parameter3 baa69a64, parameter4 baa69760.
...
Data:
0000: 53 79 73 74 65 6d 20 45 System E
0008: 72 72 6f 72 20 20 45 72 rror Er
0010: 72 6f 72 20 63 6f 64 65 ror code
0018: 20 31 30 30 30 30 30 37  1000007
0020: 65 20 20 50 61 72 61 6d e  Param
0028: 65 74 65 72 73 20 63 30 eters c0
0030: 30 30 30 30 30 35 2c 20 000005,
0038: 62 61 65 65 35 64 33 62 baee5d3b
0040: 2c 20 62 61 61 36 39 61 , baa69a
0048: 36 34 2c 20 62 61 61 36 64, baa6
0050: 39 37 36 30 9760
This issue mostly occurs when the Q:\ drive letter created by the Microsoft App-V solution is added to the filter rule.
This issue is resolved in this release.
-
VMware Tools is updated to provide pre-built modules for SUSE Linux Enterprise 11 SP3, Oracle Linux 5.x with 2.6.39-200/300/400 kernels and Oracle Linux 6.x with 2.6.39-200/300/400 kernels
-
vmblock modules might fail to compile when you install VMware Tools on Linux kernel versions later than 3.5
When you install VMware Tools on Linux kernel versions later than 3.5, vmblock modules might fail to compile and the make: *** [vmblock.ko] Error 2 error message is displayed.
This issue is resolved in this release. The fix stops compilation of the vmblock module on kernels 3.5 and later, because FUSE support is available from kernel 2.6.32 onward.
- VMware Tools fails to start if the VMCI and VSOCK drivers are installed with the clobber option
If you install VMware Tools with the --clobber-kernel-modules=vmci,vsock option, the VMware Tools service fails to start and an error message similar to the following is displayed:
Creating a new initrd boot image for the kernel.
update-initramfs: Generating /boot/initrd.img-3.9.0-rc2-custom
vmware-tools-thinprint start/running
initctl: Job failed to start
Unable to start services for VMware Tools
Execution aborted.
This issue is resolved in this release. You can no longer install VMCI and VSOCK drivers after they are upstreamed with Linux kernel versions later than 3.9.
-
VMware Tools might stop running in Novell NetWare 6.5 virtual machines
VMware Tools might stop running in Novell NetWare 6.5 virtual machines. This might cause high CPU utilization and DRS migration.
This issue is resolved in this release.
-
VMware config.pl fails to print a reboot recommendation after a VMXNET 3 and PVSCSI upgrade
On any Linux guest operating system, when you install VMware Tools and upgrade it to the latest version, the VMware Tools installation fails to print recommendations to reload the VMXNET/VMXNET 3 and PVSCSI driver modules or to reboot the virtual machine at the end of the installation.
This issue is resolved in this release.
-
Guest operating system event viewer displays warning messages after you install VMware Tools
After you install VMware Tools, if you attempt an RDP connection to a Windows virtual machine, some of the plugins might display a warning message in the Windows event log. The warning message indicates a failure to send remote procedure calls to the host.
This issue is resolved in this release.
-
Memory leak in vmtoolsd.exe processes on Windows guest operating systems
If you install VMware Tools on Windows guest operating systems (XP and later) and start the Windows Performance Monitor while the system is idle, you might notice that the two vmtoolsd.exe processes have a memory leak.
This issue is resolved in this release.
-
VMware Tools SVGA driver might cause Windows 8 virtual machines to stop responding
After you install VMware Tools, Windows 8 virtual machines might stop responding when the guest operating system is restarted.
This issue is resolved in this release.
-
vmtoolsd fails while retrieving virtual machine information with --cmd argument
Sometimes vmtoolsd fails when you invoke it with the command line option --cmd. This issue is observed on vmtoolsd versions 9.0.1 and 9.0.5.
This issue is resolved in this release.
-
When you install VMware Tools using Operating System Specific Packages, the /tmp/vmware-root directory fills up with vmware-db.pl.* files
When you install VMware Tools using OSPs, you can see an increase in the number of log files present in the /tmp/vmware-root directory. This issue is observed on SUSE Linux Enterprise Server 11 Service Pack 2 and Red Hat Enterprise Linux 6 virtual machines.
This issue is resolved in this release.
-
VMware Tools installation results in broken symbolic links in /usr/bin/vmware-user-wrapper
When you install or upgrade VMware Tools, the /usr/bin/vmware-user-wrapper symbolic links might be broken. This issue is observed on Linux guest operating system.
This issue is resolved in this release.
-
Guestinfo plugin fails to collect the IPv4 routing table when VLANs are in use in the guest operating system
If you configure a SUSE Linux Enterprise Server 11 virtual machine with VLAN interfaces of the form eth0.100 or eth0.200 and then install VMware Tools, the guest info plugin fails to parse /proc/net/route and /proc/net/ipv6 and logs multiple messages to the system log.
This issue is resolved in this release.
-
vCenter Protect agent displays a warning message about an unsigned executable during VMware Tools update
When you attempt a VMware Tools update on an older ESXi 5.1.x server, the vCenter Protect agent displays the Unregistering VSS driver warning message, indicating the use of an unsigned executable. Adding the executable to the list of files copied to the install folder resolves this issue.
This issue is resolved in this release.
Known Issues
Installation and Upgrade Issues
-
Inventory objects might not be visible after upgrading a vCenter Server Appliance configured with Postgres database
When a vCenter Server Appliance configured with Postgres database is upgraded from 5.0 Update 2 to 5.1 Update 1, inventory objects such as datacenters, vDS and so on that existed before the upgrade might not be visible. This issue occurs when you use vSphere Web Client to connect to vCenter Server appliance.
Workaround: Restart the Inventory service after upgrading vCenter Server Appliance.
-
For Auto Deploy Stateful installation, cannot use firstdisk argument of ESX on systems that have ESX/ESXi already installed on USB
You configure the host profile for a host that you want to set up for Stateful Install with Auto Deploy. As part of configuration, you select USB as the disk, and you specify esx as the first argument. The host currently has ESX/ESXi installed on USB. Instead of installing ESXi on USB, Auto Deploy installs ESXi on the local disk.
Workaround: None.
-
Auto Deploy PowerCLI cmdlets Copy-DeployRule and Set-DeployRule require object as input
When you run the Copy-DeployRule or Set-DeployRule cmdlet and pass in an image profile or host profile name, an error results.
Workaround: Pass in the image profile or host profile object.
-
Applying host profile that is set up to use Auto Deploy with stateless caching fails if ESX is installed on the selected disk
You use host profiles to set up Auto Deploy with stateless caching enabled. In the host profile, you select a disk on which a version of ESX (not ESXi) is installed. When you apply the host profile, an error that includes the following text appears.
Expecting 2 bootbanks, found 0
Workaround: Remove the ESX software from the disk, or select a different disk to use for stateless caching.
-
vSphere Auto Deploy no longer works after a change to the IP address of the machine that hosts the Auto Deploy server
You install Auto Deploy on a different machine than the vCenter Server, and change the IP address of the machine that hosts the Auto Deploy server. Auto Deploy commands no longer work after the change.
Workaround: Restart the Auto Deploy server service.
net start vmware-autodeploy-waiter
If restarting the service does not resolve the issue, you might have to reregister the Auto Deploy server. Run the following command, specifying all options.
autodeploy-register.exe -R -a vCenter-IP -p vCenter-Port -u user_name -w password -s setup-file-path
-
On HP DL980 G7, ESXi hosts do not boot through Auto Deploy when onboard NICs are used
You cannot boot an HP DL980 G7 system using Auto Deploy if the system is using the onboard (LOM Netxen) NICs for PXE booting.
Workaround: Install an HP-approved add-on NIC on the host, for example the HP NC360T, and use that NIC for PXE booting.
-
A live update with esxcli fails with a VibDownloadError
A user performs two updates in sequence, as follows.
- A live install update using the esxcli software profile update or esxcli vib update command.
- A reboot required update.
The second transaction fails. One common failure is signature verification, which can be checked only after the VIB is downloaded.
Workaround: Resolving the issue is a two-step process.
- Reboot the ESXi host to clean up its state.
- Repeat the live install.
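For reference, the live install is typically run from the ESXi Shell against an offline bundle. The depot path and profile name below are hypothetical placeholders; list the profiles contained in the bundle first, then apply one of them:
esxcli software sources profile list -d /vmfs/volumes/datastore1/update-bundle.zip
esxcli software profile update -d /vmfs/volumes/datastore1/update-bundle.zip -p <profile-name>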
-
ESXi scripted installation fails to find the kickstart (ks) file on a CD-ROM drive when the machine does not have any NICs connected
When the kickstart file is on a CD-ROM drive in a system that does not have any NICs connected, the installer displays the error message: Can't find the kickstart file on cd-rom with path <path_to_ks_file>.
Workaround: Reconnect the NICs to establish network connection, and retry the installation.
-
Scripted installation fails on the SWFCoE LUN
When the ESXi installer invokes installation using the kickstart (ks) file, not all FCoE LUNs have been scanned and populated by the time installation starts. This causes the scripted installation on any of those LUNs to fail. The failure occurs when the https, http, or ftp protocol is used to access the kickstart file.
Workaround: In the %pre section of the kickstart file, include a sleep of two minutes:
%pre --interpreter=busybox
sleep 120
-
Potential problems if you upgrade vCenter Server but do not upgrade Auto Deploy server
When you upgrade vCenter Server, vCenter Server replaces the 5.0 vSphere HA agent (vmware-fdm) with a new agent on each ESXi host. The replacement happens each time an ESXi host reboots. If vCenter Server is not available, the ESXi hosts cannot join a cluster.
Workaround: If possible, upgrade the Auto Deploy server.
If you cannot upgrade the Auto Deploy server, you can use Image Builder PowerCLI cmdlets included with vSphere PowerCLI to create an ESXi 5.0 image profile that includes the new vmware-fdm VIB. You can supply your hosts with that image profile.
- Add the ESXi 5.0 software depot and add the software depot that contains the new vmware-fdm VIB.
Add-EsxSoftwareDepot C:\Path\VMware-Esxi-5.0.0-buildnumber-depot.zip
Add-EsxSoftwareDepot http://vcenter server/vSphere-HA-depot
- Clone the existing image profile and add the vmware-fdm VIB.
New-EsxImageProfile -CloneProfile "ESXi-5.0.0-buildnumber-standard" -name "Imagename"
Add-EsxSoftwarePackage -ImageProfile "ImageName" -SoftwarePackage vmware-fdm
- Create a new rule that assigns the new image profile to your hosts and add the rule to the ruleset.
New-DeployRule -Name "Rule Name" -Item "Image Name" -Pattern "my host pattern"
Add-DeployRule -DeployRule "Rule Name"
- Perform a test and repair compliance operation for the hosts.
Test-DeployRuleSetCompliance Host_list
-
If Stateless Caching is turned on, and the Auto Deploy server becomes unavailable, the host might not automatically boot using the stored image
In some cases, a host that is set up for stateless caching with Auto Deploy does not automatically boot from the disk that has the stored image if the Auto Deploy server becomes unavailable. This can happen even if the boot device that you want is next in logical boot order. What precisely happens depends on the server vendor BIOS settings.
Workaround: Manually select the disk that has the cached image as the boot device.
-
During upgrade of ESXi 5.0 hosts to ESXi 5.1 with ESXCLI, vMotion and Fault Tolerance (FT) logging settings are lost
On an ESXi 5.0 host, you enable vMotion and FT for a port group. You upgrade the host by running the esxcli software profile update command. As part of a successful upgrade, the vMotion settings and the logging settings for Fault Tolerance are returned to the default settings, that is, disabled.
Workaround: Use vSphere Upgrade Manager to upgrade the hosts, or return vMotion and Fault Tolerance to their pre-upgrade settings manually.
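As a sketch of restoring the vMotion setting manually from the ESXi Shell, assuming vmk1 is the VMkernel adapter that carried vMotion before the upgrade (substitute your own interface name); Fault Tolerance logging can then be re-enabled on the appropriate port group from the vSphere Client:
vim-cmd hostsvc/vmotion/vnic_set vmk1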
Networking Issues
-
On an SR-IOV enabled ESXi host, virtual machines associated with virtual functions might fail to start
When SR-IOV is enabled on ESXi 5.1 hosts with Intel ixgbe NICs and several virtual functions are enabled in this environment, some virtual machines might fail to start. Messages similar to the following are displayed in the vmware.log file:
2013-02-28T07:06:31.863Z| vcpu-1| I120: Msg_Post: Error
2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
2013-02-28T07:06:31.863Z| vcpu-1| I120+ PCIPassthruChangeIntrSettings: 0a:17.3 failed to register interrupt (error code 195887110)
2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5122262e-ab950f8e-cd4f-b8ac6f917d68/VMLibRoot/VMLib-RHEL6.2-64-HW7-default-3-2-1361954882/vmwar
2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.withoutLog] You can request support.
2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.vmSupport.vmx86]
2013-02-28T07:06:31.863Z| vcpu-1| I120+ To collect data to submit to VMware technical support, run "vm-support".
2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.response] We will respond on the basis of your support entitlement.
Workaround: Reduce the number of virtual functions associated with the affected virtual machine and start it.
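One way to reduce the number of virtual functions available on the host is to lower the max_vfs parameter of the ixgbe module and reboot. The value 8 below is only an example, and the exact parameter string accepted depends on your ixgbe driver version (comma-separated values are required when more than one ixgbe port is present):
esxcli system module parameters set -m ixgbe -p "max_vfs=8"
esxcli system module parameters list -m ixgbe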
-
System stops responding during TFTP/HTTP transfer when provisioning ESXi 5.1 or 5.0 U1 with Auto Deploy
When provisioning ESXi 5.1 or 5.0 U1 with Auto Deploy on Emulex 10GbE NC553i FlexFabric 2 Ports using the latest open-source gPXE, the system stops responding during TFTP/HTTP transfer.
Emulex 10GbE PCI-E controllers are memory-mapped controllers. The PXE/UNDI stack running on this controller must switch to big real mode from real mode during the PXE TFTP/HTTP transfer to program the device-specific registers located above 1MB in order to send and receive packets through the network. During this process, CPU interrupts are inadvertently enabled, which causes the system to stop responding when other device interrupts are generated during the CPU mode switching.
Workaround: Upgrade the NIC firmware to build 4.1.450.7 or later.
-
Changes to the number of ports on a standard virtual switch do not take effect until the host is rebooted
When you change the number of ports on a standard virtual switch, the changes do not take effect until you reboot the host. This differs from the behavior with a distributed virtual switch, where changes to the number of ports take effect immediately.
When changing the number of ports on a standard virtual switch, ensure that the total number of ports on the host, from both standard and distributed switches, does not exceed 4096.
Workaround: None.
- Administrative state of a physical NIC not reported properly as down
Administratively setting a physical NIC state to down does not conform to IEEE standards. When a physical NIC is set to down through the virtual switch command, it causes two known problems:
-
ESXi experiences a traffic increase that it cannot handle, which wastes network resources at the physical switch fronting ESXi and in ESXi itself.
-
The NIC behaves in an unexpected way. Operators expect to see the NIC powered down, but the NIC displays as still active.
VMware recommends using the ESXCLI network down -n vmnicN command with the following caveats:
-
This command turns off the driver only. It does not power off the NIC. When the ESXi physical network adapter is viewed from the management interface of the physical switch fronting the ESXi system, the standard switch uplink still appears to be active.
-
The administrative state of a NIC is not visible in the ESXCLI or the UI. When debugging, you must remember to check the state by examining /etc/vmware/esx.conf; a quick check is sketched after this list.
-
The SNMP agent reports the administrative state; however, it reports the state incorrectly if the NIC was set to down while the operational state was already down. It reports the administrative state correctly if the NIC was set to down while the operational state was active.
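A quick way to check that state from the ESXi Shell, assuming the NIC in question is vmnic2 (the exact key names stored in esx.conf can differ between builds):
grep -i vmnic2 /etc/vmware/esx.conf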
Workaround: Change the administrative state on the physical switch fronting the ESXi system to down instead of using the virtual switch command.
-
Linux driver support changes
Device drivers for VMXNET2 or VMXNET (flexible) virtual NICs are not available for virtual machines running Linux kernel version 3.3 and later.
Workaround: Use a VMXNET3 or e1000 virtual NIC for virtual machines running Linux kernel version 3.3 and later.
-
vSphere 5.0 network I/O control bandwidth allocation is not distributed fairly across multiple uplinks
In vSphere 5.0, if a networking bandwidth limit is set on a resource pool while using network I/O control, this limit is enforced across a team of uplinks at the host level. This bandwidth cap is implemented by a token distribution algorithm that is not designed to fairly distribute bandwidth between multiple uplinks.
Workaround: vSphere 5.1 network I/O control limits have been narrowed to a per-uplink basis.
-
Mirrored Packet Length setting could cause a remote mirroring source session not to function
When you configure a remote mirroring source session with the Mirrored Packet Length option set, the destination does not receive some mirrored packets. However, if you disable the option, packets are received again.
If the Mirrored Packet Length option is set, packets longer than the specified length are truncated, and some packets are dropped. Lower-layer code does not fragment the truncated packets or recalculate their checksums. Two conditions might cause packets to be dropped:
-
The Mirrored Packet Length is greater than the maximum transmission unit (MTU)
If TSO is enabled in your environment, the original packets could be very large. After being truncated by the Mirrored Packet Length, they are still larger than the MTU, so they are dropped by the physical NIC.
- Intermediate switches perform L3 check
Some truncated packets can have the wrong packet length and checksum. Some advanced physical switches check L3 information and drop invalid packets. The destination does not receive the packets.
Workaround:
-
Enabling more than 16 VMkernel network adapters causes vMotion to fail
vSphere 5.x has a limit of 16 VMkernel network adapters enabled for vMotion per host. If you enable more than 16 VMkernel network adapters for vMotion on a given host, vMotion migrations to or from that host might fail. An error message in the VMkernel logs on ESXi says Refusing request to initialize 17 stream ip entries, where the number indicates how many VMkernel network adapters you have enabled for vMotion.
Workaround: Disable vMotion VMkernel network adapters until only a total of 16 are enabled for vMotion.
-
vSphere network core dump does not work when using the nx_nic driver in a VLAN environment
When network core dump is configured on a host that is part of a VLAN, network core dump fails when the NIC uses a QLogic Intelligent Ethernet Adapters driver (nx_nic). Received network core dump packets are not tagged with the correct VLAN tag if the uplink adapter uses nx_nic.
Workaround: Use another uplink adapter with a different driver when configuring network core dump in a VLAN.
-
If the kickstart file for a scripted installation calls a NIC already in use, the installation fails
If you use a kickstart file to set up a management network post installation, and you call a NIC that is already in use from the kickstart file, you see the following error message: Sysinfo error on operation returned status: Busy. Please see the VMkernel log for detailed error information.
The error is encountered when you initiate a scripted installation on one system with two NICs: a NIC configured for SWFCoE/SWiSCSI, and a NIC configured for networking. If you use the network NIC to initiate the scripted installation by providing either netdevice=<nic> or BOOTIF=<MAC of the NIC> at boot options, the kickstart file uses the other NIC, netdevice=<nic configured for SWFCoE / SWiSCSI>, in the network line to configure the management network.
Installation (partitioning the disks) is successful, but when the installer tries to configure the management network for the host with the network parameters provided in the kickstart file, it fails because the NIC was in use by SWFCoE/SWiSCSI.
Workaround: Use an available NIC in the kickstart file for setting up a management network after installation.
-
Virtual machines running ESX that also use VMXNET3 as the pNIC might crash
Virtual machines running ESX as a guest that also use VMXNET3 as the pNIC might crash because support for VMXNET3 is experimental. The default NIC for an ESX virtual machine is e1000, so this issue is encountered only when you override the default and choose VMXNET3 instead.
Workaround: Use e1000 or e1000e as the pNIC for the ESX virtual machine.
-
Error message is displayed when a large number of dvPorts is in use
When you power on a virtual machine with dvPort on a host that already has a large number of dvPorts in use, an Out of memory or Out of resources error is displayed. This can also occur when you list the switches on a host using an esxcli command.
Workaround: Increase the dvsLargeHeap size.
- Change the host's advanced configuration option:
- ESXi Shell command: esxcfg-advcfg -s 100 /Net/DVSLargeHeapMaxSize
- Virtual Center: Browse to Host configuration -> Software Panel -> Advanced Settings -> Under "Net", change the DVSLargeHeapMaxSize value from 80 to 100.
- vSphere 5.1 Web Client: Browse to Manage host -> Settings -> Advanced System Settings -> Filter. Change the DVSLargeHeapMaxSize value from 80 to 100.
- Capture a host profile from the host. Associate the profile with the host and update the answer file.
- Reboot the host to confirm the value is applied.
Note: The max value for /Net/DVSLargeHeapMaxSize is 128.
Contact VMware Support if you face issues during a large deployment after changing /Net/DVSLargeHeapMaxSize to 128 and the logs display either of the following error messages:
Unable to Add Port; Status(bad0006)= Limit exceeded
Failed to get DVS state from vmkernel Status (bad0014)= Out of memory
-
ESXi fails with Emulex BladeEngine-3 10G NICs (be2net driver)
ESXi might fail on systems that have Emulex BladeEngine-3 10G NICs when a vCDNI-backed network pool is configured using VMware vCloud Director. You must obtain an updated device driver from Emulex when configuring a network pool with this device.
Workaround: None.
Storage Issues
-
RDM LUNs get detached from virtual machines that migrate from VMFS datastore to NFS datastore
If you use the vSphere Web Client to migrate virtual machines with RDM LUNs from a VMFS datastore to an NFS datastore, the migration operation completes without any error or warning messages, but the RDM LUNs are detached from the virtual machine after the migration. However, the migration operation creates a vmdk file on the NFS datastore with the same size as the RDM LUN, to replace the RDM LUN.
If you use vSphere Client, an appropriate error message is displayed in the compatibility section of the migration wizard.
Workaround: None.
-
VMFS5 datastore creation might fail when you use an EMC Symmetrix VMAX/VMAXe storage array
If your ESXi host is connected to a VMAX/VMAXe array, you might not be able to create a
VMFS5 datastore on a LUN presented from the array. If this is the case, the following error appears: An error occurred during host configuration. The error is a result of the ATS (VAAI) portion of the Symmetrix Enginuity microcode (VMAX 5875.x), which prevents a new datastore from being created on a previously unwritten LUN.
Workaround:
- Disable Hardware Accelerated Locking on the ESXi host.
- Create a VMFS5 datastore.
- Reenable Hardware Accelerated Locking on the host.
Use the following tasks to disable and reenable the Hardware Accelerated Locking parameter.
In the vSphere Web Client
- Browse to the host in the vSphere Web Client navigator.
- Click the Manage tab, and click Settings.
- Under System, click Advanced System Settings.
- Select VMFS3.HardwareAcceleratedLocking and click the Edit icon.
- Change the value of the VMFS3.HardwareAcceleratedLocking parameter: 0 to disable, 1 to reenable.
In the vSphere Client
- In the vSphere Client inventory panel, select the host.
- Click the Configuration tab, and click Advanced Settings under Software.
- Change the value of the VMFS3.HardwareAcceleratedLocking parameter: 0 to disable, 1 to reenable.
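As an alternative to the client procedures above, the same parameter can typically be toggled from the ESXi Shell; the first command disables Hardware Accelerated Locking, and the second reenables it after the datastore is created:
esxcli system settings advanced set -i 0 -o /VMFS3/HardwareAcceleratedLocking
esxcli system settings advanced set -i 1 -o /VMFS3/HardwareAcceleratedLocking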
-
Attempts to create a GPT partition on a blank disk might fail when using Storagesystem::updateDiskPartitions()
You can use the Storagesystem::computeDiskPartitionInfo API to retrieve disk specification, and then use the disk specification to label the disk and create a partition with Storagesystem::updateDiskPartitions().
However, if the disk is initially blank and the target disk format is GPT, your attempts to create the partition might fail.
Workaround: Use DatastoreSystem::createVmfsDatastore instead to label and partition a blank disk, and to create a VMFS5 datastore.
-
Attempts to create a diagnostic partition on a GPT disk might fail
If a GPT disk has no partitions, or the trailing portion of the disk is empty, you might not be able to create a diagnostic partition on the disk.
Workaround: Avoid using GPT-formatted disks for diagnostic partitions.
If you must use an existing blank GPT disk for the diagnostic partition, convert the disk to the MBR format.
- Create a VMFS3 datastore on the disk.
- Remove the datastore.
The disk format changes from GPT to MBR.
-
ESXi cannot boot from a FCoE LUN that is larger than 2TB and accessed through an Intel FCoE NIC
When you install ESXi on a FCoE boot LUN that is larger than 2TB and is accessed through an Intel FCoE NIC, the installation might succeed. However, when you attempt to boot your ESXi host, the boot fails. You see the error messages: ERROR: No suitable geometry for this disk capacity! and ERROR: Failed to connect to any configured disk! at BIOS time.
Workaround: Do not install ESXi on a FCoE LUN larger than 2TB if it is connected to the Intel FCoE NIC configured for FCoE boot. Install ESXi on a FCoE LUN that is smaller than 2TB.
Server Configuration Issues
-
Applying host profiles might fail when accessing VMFS folders through console
If a user is accessing the VMFS datastore folder through the console at the same time a host
profile is being applied to the host, the remediation or apply task might fail.
This failure occurs when stateless caching is enabled on the host profile or if an auto deploy
installation occurred.
Workaround: Do not access the VMFS datastore through the console while remediating the host
profile.
-
Leading white space in login banner causes host profile
compliance failure
When you edit a host profile and change the text for the Login Banner (Message
of the Day) option, but add a leading white space in the banner text, a
compliance error occurs when the profile is applied. The compliance error
Login banner has been modified appears.
Workaround: Edit the host profile and remove the leading white space from
the Login Banner policy option.
-
Host profile extracted from ESXi 5.0 host fails to apply to ESX 5.1 host with Active
Directory enabled
When applying a host profile with Active Directory enabled that was originally extracted from an ESXi 5.0
host to an ESX 5.1 host, the apply task fails. Setting the maximum memory size for the likewise system
resource pool might cause an error to occur. When Active Directory is enabled, the services in the
likewise system resource pool consume more than the default maximum memory limit for ESXi 5.0 captured
in an ESXi 5.0 host profile. As a result, applying an ESXi 5.0 host profile fails during attempts to set
the maximum memory limit to the ESXi 5.0 levels.
Workaround: Perform one of the following:
- Manually edit the host profile to increase the maximum memory limit for the likewise group.
- From the host profile editor, navigate to the Resource Pool folder, and view host/vim/vmvisor/plugins/likewise.
- Modify the Maximum Memory (MB) setting from 20 (the ESXi 5.0 default) to 25 (the ESXi 5.1 default).
- Disable the subprofile for the likewise group. Do one of the following:
- In the vSphere Web Client, edit the host profile and deselect the checkbox for the Resource Pool
folder. This action disables all resource pool management. You can disable this specifically for the
host/vim/vmvisor/plugins/likewise item under the Resource Pool folder.
- In the vSphere Client, right-click the host profile and select Enable/Disable Profile Configuration...
from the menu.
-
Host gateway deleted and compliance failures occur when ESXi 5.0.x host profile
re-applied to stateful ESXi 5.1 host
When an ESXi 5.0.x host profile is applied to a freshly installed ESXi 5.1 host, the profile compliance
status is noncompliant. After you apply the same profile again, it deletes the host's gateway IP address, and
the compliance status continues to show as noncompliant with the IP route configuration doesn't
match the specification status message.
Workaround: Perform one of the following workarounds:
- Log in to the host through the DCUI and add the default gateway manually with the following esxcli command:
esxcli network ip route ipv4 add --gateway xx.xx.xx.xx --network yy.yy.yy.yy
- Extract a new host profile from the ESX 5.1 host after applying the ESX 5.0 host profile once.
Migrate the ESX 5.1 host to the new ESX 5.1-based host profile.
-
Compliance errors might occur after stateless caching enabled on USB disk
When stateless caching to USB disks is enabled on a host profile, compliance errors might occur after
remediation. After rebooting the host so that the remediated changes are applied, the stateless
caching is successful, but compliance failures continue.
Workaround: No workaround is available.
-
Hosts with a large number of datastores time out while applying a host profile with
stateless caching enabled
A host that has a large number of datastores times out when applying a host profile with
stateless caching enabled.
Workaround: Use the vSphere Client to increase the timeout:
- Select Administrator > vCenter Server Settings.
- Select Timeout Settings.
- Change the values for Normal Operations and Long Operations to 3600 seconds.
-
Cannot extract host profile from host when IPv4 is disabled on vmknics
If you remove all IPv4 addresses from all vmknics, you cannot extract a host profile
from that host. This action affects hosts provisioned with Auto Deploy the most, because host profiles
are the only way to save the host configuration in this environment.
Workaround: Assign an IPv4 address to at least one vmknic.
-
Applying host profile fails when applying a host profile extracted
from an ESXi 4.1 host on an ESXi 5.1 host
If you set up a host with ESXi 4.1, extract a host profile from this host (with vCenter
Server), and attempt to attach a profile to an ESXi 5.1 host, the operation fails when
you attempt to apply the profile. You might receive the following error: NTP service
turned off.
The NTPD service could be running (in the on state) even without an NTP server provided in /etc/ntp.conf on ESXi 4.1.
ESXi 5.1 requires an explicit NTP server for the service to run.
Workaround:
Turn on the NTP service by adding a valid NTP server to /etc/ntp.conf and restart
the NTP daemon on the 5.1 host. Confirm that the service persists after the reboot. This action
ensures that the NTP service is in sync for the host and the profile being applied to it.
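A minimal sketch of this workaround from the ESXi Shell, using pool.ntp.org as a stand-in for a valid NTP server reachable from your host:
echo "server pool.ntp.org" >> /etc/ntp.conf
/etc/init.d/ntpd restart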
-
Host profile shows noncompliant after profile successfully applied
This problem occurs when extracting a host profile from an ESXi 5.0 host and applying it to an ESXi 5.1 host
that contains a local SAS device. Even when the host profile remediation is successful, the host profile compliance
shows as noncompliant.
You might receive errors similar to the following:
- Specification state absent from host: device naa.500000e014ab4f70 Path Selection Policy needs to be set to VMW_PSP_FIXED
- Specification state absent from host: device naa.500000e014ab4f70 parameters needs to be set to State = "on" Queue Full Sample Size = "0" Queue Full Threshold = "0"
The ESXi 5.1 host profile storage plugin filters out local SAS devices for PSA and NMP device configuration,
while ESXi 5.0 contains such device configurations. This results in a missing device when the older
host profile is applied to a newer host.
Workaround: Manually edit the host profile, and remove the PSA and NMP device configuration entries
for all local SAS devices. You can determine if a device is a local SAS by entering the following esxcli command:
esxcli storage core device list
If the following line is returned, the device is a local SAS:
Is Local SAS Device
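For example, to pair each device name with that field from the ESXi Shell (a quick filter only; the full output of the command above is authoritative):
esxcli storage core device list | grep -E "Display Name|Is Local SAS Device"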
-
Default system services always start on ESXi hosts provisioned with Auto Deploy
For ESXi hosts provisioned with Auto Deploy, the Service Startup Policy in the Service Configuration section of the associated host profile is not fully honored.
In particular, if one of the services that is turned on by default on ESXi has a Startup Policy value of off, that service still starts at boot time on the ESXi
host provisioned with Auto Deploy.
Workaround: Manually stop the service after booting the ESXi host.
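A sketch of stopping such a service manually after boot, using the CIM broker watchdog as a hypothetical example (substitute the service that your host profile sets to off):
/etc/init.d/sfcbd-watchdog stop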
-
Information retrieval from VMWARE-VMINFO-MIB does not happen correctly after an snmpd restart
Some information from VMWARE-VMINFO-MIB might be missing during SNMPWalk after you restart the snmpd daemon using /etc/init.d/snmpd restart from the ESXi Shell.
Workaround:
Do not use /etc/init.d/snmpd restart. You must use the esxcli system snmp set --enable command to start or stop the SNMP daemon. If you used /etc/init.d/snmpd restart to restart snmpd from the ESXi Shell, restart Hostd, either from DCUI or by using /etc/init.d/hostd restart from the ESXi Shell.
vCenter Server and vSphere Client Issues
-
Enabling or Disabling View Storage Accelerator might cause ESXi hosts to lose connectivity to vCenter Server
If VMware View is deployed with vSphere 5.1, and a View administrator enables or disables View Storage Accelerator in a desktop pool, ESXi 5.1 hosts might lose connectivity to vCenter Server 5.1.
The View Storage Accelerator feature is also called Content Based Read Caching. In the View 5.1 View Administrator console, the feature is called Host caching.
Workaround: Do not enable or disable View Storage Accelerator in View environments deployed with vSphere 5.1.
Virtual Machine Management Issues
-
Virtual Machine compatibility upgrade from ESX 3.x and later (VM version 4) incorrectly configures the
Windows virtual machine Flexible adapter to the Windows system default driver
If you have a Windows guest operating system with a Flexible network adapter that is configured for
the VMware Accelerated AMD PCnet Adapter driver, when you upgrade the virtual machine compatibility from
ESX 3.x and later (VM version 4) to any later compatibility setting, for example, ESXi 4.x and
later (VM version 7), Windows configures the Flexible adapter to the Windows AMD PCNET Family PCI
Ethernet Adapter default driver.
This misconfiguration occurs because the VMware Tools drivers are unsigned and Windows picks up the signed default
Windows driver. Flexible adapter network settings that existed before the compatibility upgrade
are lost, and the network speed of the NIC changes from 1Gbps to 10Mbps.
Workaround: Configure the Flexible network adapters to use the VMXNET driver from the Windows guest OS after you upgrade the
virtual machine's compatibility. If your guest is updated with ESXi 5.1 VMware Tools, the VMXNET driver is
installed in the following location: C:\Program Files\Common Files\VMware\Drivers\vmxnet\.
-
When you install VMware Tools on a virtual machine and reboot, the network becomes unusable
On virtual machines with CentOS 6.3 and Oracle Linux 6.3 operating systems, the network becomes unusable after a successful installation of VMware Tools and a reboot of the virtual machine. When you attempt to manually get the IP address from a DHCP server or set a static IP address from the command line, the error Cannot allocate memory appears.
The problem is that the Flexible network adapter, which is used by default, is not a good choice for those operating systems.
Workaround: Change the network adapter from Flexible to E1000 or VMXNET 3, as follows:
- Run the vmware-uninstall-tools.pl command to uninstall VMware Tools.
- Power off the virtual machine.
- In the vSphere Web Client, right-click the virtual machine and select Edit Settings.
- Click Virtual Hardware, and remove the current network adapter by clicking the Remove icon.
- Add a new Network adapter, and choose the adapter type E1000 or VMXNET 3.
- Power on the virtual machine.
- Reinstall VMware Tools.
-
Clone or migration operations that involve non-VMFS virtual disks on ESXi fail with an error
Regardless of whether you use the vmkfstools command or the client to perform a clone, copy, or migration operation on virtual disks of hosted formats, the operation fails with the following error message: The system cannot find the file specified.
Workaround: To perform a clone, copy, or migration operation on the virtual disks of hosted formats, you need to load the VMkernel multiextent module into ESXi.
- Log in to ESXi Shell and load the multiextent module.
# vmkload_mod multiextent
- Check if any of your virtual machine disks are of a hosted type. Hosted disks end with the -s00x.vmdk extension.
- Convert virtual disks in hosted format to one of the VMFS formats.
- Clone source hosted disk test1.vmdk to test2.vmdk.
# vmkfstools -i test1.vmdk test2.vmdk -d zeroedthick|eagerzeroedthick|thin
- Delete the hosted disk test1.vmdk after successful cloning.
# vmkfstools -U test1.vmdk
- Rename the cloned vmfs type disk test2.vmdk to test1.vmdk.
# vmkfstools -E test2.vmdk test1.vmdk
- Unload the multiextent module.
# vmkload_mod -u multiextent
-
A virtual machine does not have an IP address assigned to it and does not appear operational
This issue is caused by a LUN reset request initiated from a guest OS. This issue is specific to IBM XIV Fibre Channel array with software FCoE configured in ESXi hosts. Virtual machines that reside on the LUN show the following problems:
- No IP address is assigned to the virtual machines.
- Virtual machines cannot power on or power off.
- No mouse cursor appears inside the console. As a result, there is no way to control or interact with the affected virtual machine inside the guest OS.
Workaround: From your ESXi host, reset the LUN where virtual machines that experience troubles reside.
- Run the following command to get the LUN's information:
# vmkfstools -P /vmfs/volumes/DATASTORE_NAME
- Search for the following line in the output to obtain the LUN's UID:
Partitions spanned (on 'lvm'): eui.001738004XXXXXX:1
eui.001738004XXXXXX is the device UID.
- Run the following command to reset the LUN:
# vmkfstools -L lunreset /vmfs/devices/disks/eui.001738004XXXXXX
- If a non-responsive virtual machine resides on a datastore that has multiple LUNs associated with it, for example, added extents, perform the LUN reset for all datastore extents.
Migration Issues
-
Attempts to use Storage vMotion to migrate multiple linked-clone virtual machines fail
This failure typically affects linked-clone virtual machines. The failure occurs when the size of delta disks is 1MB and the Content Based Read Cache (CBRC) feature has been enabled in ESXi hosts. You see the following error message: The source detected that the destination failed to resume.
Workaround: Use one of the following methods to avoid Storage vMotion failures:
- Use 4KB as the delta disk size.
- Instead of using Storage vMotion, migrate powered-off virtual machines to a new datastore.
VMware HA and Fault Tolerance Issues
Fault tolerant virtual machines crash when set to record statistics
information on a vCenter Server beta build
The vmx*3 feature allows users to run the stats vmx to collect performance statistics
for debugging support issues. The stats vmx is not compatible when Fault Tolerance is
enabled on a vCenter Server beta build.
Workaround: When enabling Fault Tolerance, ensure that the virtual machine is not
set to record statistics on a beta build of vCenter Server.
Supported Hardware Issues
PCI Unknown Unknown status is displayed in vCenter Server on the Apple Mac Pro server
The hardware status tab in vSphere 5.1 displays Unknown Unknown for some PCI devices on the Apple Mac Pro. This is because of missing hardware descriptions for these PCI devices on the Apple Mac Pro. The display error in the hardware status tab does not prevent these PCI devices from functioning.
Workaround: None.
PCI Unknown Unknown status is displayed in vCenter Server on the AMD PileDriver
The hardware status tab in vSphere 5.1 displays Unknown Unknown for some PCI devices on the AMD PileDriver. This is because of missing hardware descriptions for these PCI devices on the AMD PileDriver. The display error in the hardware status tab does not prevent these PCI devices from functioning.
Workaround: None.
DPM is not supported on the Apple Mac Pro server
The vSphere 5.1 distributed power management (DPM) feature is not supported on the Apple Mac Pro. Do not add the Apple Mac Pro to a cluster that has DPM enabled. If the host enters "Standby" state, it fails to exit the standby state when the power on command is issued and displays an operation timed out error. The Apple Mac Pro cannot wake from the software power off command that is used by vSphere when putting a host in standby state.
Workaround: If the Apple Mac Pro host enters "Standby" you must power on the host by physically pressing the power button.
IPMI is not supported on the Apple Mac Pro server
The hardware status tab in vSphere 5.1 does not display correct data or there is missing data for some of the hardware components on the Apple Mac Pro. This is because IPMI is not supported on the Apple Mac Pro.
Workaround: None.
Miscellaneous Issues
-
After a network or storage interruption, syslog over TCP, syslog over SSL, and storage logging do not restart automatically
After a network or storage interruption, the syslog service does not restart automatically in certain configurations. These configurations include syslog over TCP, syslog over SSL, and storage logging.
Workaround: Restart syslog explicitly by running the following command:
esxcli system syslog reload
You can also configure syslog over UDP, which restarts automatically.
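A sketch of switching the log host to UDP from the ESXi Shell, using 192.0.2.10 as a placeholder collector address:
esxcli system syslog config set --loghost='udp://192.0.2.10:514'
esxcli system syslog reload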