VMware ESXi 4.0 Update 3 Release Notes
ESXi 4.0 Update 3 | 05 May 2011 | Build 398348
Last Document Update: 19 May 2011
These release notes include the following topics:
What's New
The following list highlights some of the enhancements available in this release of VMware ESXi:
- Enhanced all-paths-down (APD) handling, with automatic failover when a LUN rescan occurs.
- Inclusion of additional driver: This release includes the 3ware SCSI 2.26.08.036vm40 driver, which for earlier releases is available only as a separate download.
- Updates to the VMware Tools WDDM, XPDM, and PVSCSI drivers.
Resolved Issues: In addition, this release delivers a number of bug fixes that have been documented in the Resolved Issues section.
Earlier Releases of ESXi 4.0
Features and known issues from earlier releases of ESXi 4.0 are described in the release notes for each release.
Before You Begin
ESXi, vCenter Server, and vSphere Client Version Compatibility
The VMware vSphere Compatibility Matrixes provide details of the compatibility of current and earlier versions of VMware vSphere components, including ESXi, vCenter Server, the vSphere Client, and other VMware products.
Hardware Compatibility
- Learn about hardware compatibility
The Hardware Compatibility Lists are available in the Web-based Compatibility Guide at http://www.vmware.com/resources/compatibility. The Web-based Compatibility Guide is a single point of access for all VMware compatibility guides and provides the option to search the guides and save the search results in PDF format. For example, with this guide, you can verify that your server, I/O devices, storage, and guest operating systems are compatible.
Subscribe to be notified of Compatibility Guide updates.
- Learn about vSphere compatibility:
VMware vSphere Compatibility Matrixes (PDF)
Documentation
The VMware vSphere 4.0 Update 1 documentation has been updated and is applicable to all update releases of vSphere 4.0, including VMware vSphere 4.0 Update 3. See the applicable ESXi documentation page.
Installation and Upgrade
Read the ESXi Installable and vCenter Server Setup Guide for step-by-step guidance on installing and configuring ESXi Installable and vCenter Server, or the ESXi Embedded and vCenter Server Setup Guide for step-by-step guidance on setting up ESXi Embedded and vCenter Server.
After successful installation of ESXi Installable or successful boot of ESXi Embedded, several configuration steps are essential. In particular, some licensing, networking, and security configuration is necessary. Refer to the following guides in the vSphere documentation for guidance on these configuration tasks.
Future releases of VMware vSphere might not support VMFS version 2 (VMFS2). You should consider upgrading or migrating to VMFS version 3 or higher. See the vSphere Upgrade Guide.
Future releases of VMware vCenter Server might not support installation on 32-bit Windows operating systems. VMware recommends installing vCenter Server on a 64-bit Windows operating system. If you have VirtualCenter 2.x installed, see the vSphere Upgrade Guide for instructions about installing vCenter Server on a 64-bit operating system and preserving your VirtualCenter database.
Management Information Base (MIB) files related to ESXi are not bundled with vCenter Server. Only MIB files specifically related to vCenter Server are shipped with vCenter Server 4.0.x. All MIB files can be downloaded from the VMware Web site at http://www.vmware.com/download.
Upgrading VMware Tools
VMware ESXi 4.0 Update 3 requires a VMware Tools upgrade. VMware Tools is a suite of utilities that enhances the performance of the virtual machine’s guest operating system. Refer to the VMware Tools Resolved Issues for a list of VMware Tools issues resolved in this release of ESXi.
The VMware Tools RPM installer, which is available in the VMware Tools ISO image for Linux guest operating systems, has been deprecated and will be removed in a future ESXi release. Use the tar.gz installer to install VMware Tools on virtual machines with Linux guest operating systems.
To determine an installed VMware Tools version, see Verifying a VMware Tools build version (KB 1003947).
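For example, on a Linux guest with the VMware Tools virtual CD-ROM mounted at /mnt/cdrom (the mount point and installer file name vary by release and are shown here only for illustration), installing with the tar.gz installer might look similar to the following:
mount /dev/cdrom /mnt/cdrom
tar -xzf /mnt/cdrom/VMwareTools-*.tar.gz -C /tmp
cd /tmp/vmware-tools-distrib
./vmware-install.pl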
Upgrading or Migrating to ESXi 4.0 Update 3
ESXi 4.0 Update 3 offers the following options for upgrading:
- VMware vCenter Update Manager — You can upgrade from ESXi 3.5 Update 5 and ESXi 4.0.x by using vCenter Update Manager 4.0 Update 3. See the VMware vCenter Update Manager Administration Guide for more information.
- vSphere Host Update Utility — You can upgrade from ESXi 3.5 Update 5 and ESXi 4.0.x by using the vSphere Host Update Utility 4.0 Update 3. See the vSphere Upgrade Guide for more information.
- vihostupdate command of VMware vSphere Command-Line Interface (vSphere CLI) — You can upgrade from ESXi 4.0.x by using the vihostupdate command of the vSphere CLI, as shown in the example below. For details, see the vSphere Upgrade Guide and the Patch Management Guide.
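For example, to list the bulletins installed on a host and then apply the update bundle (the host name and bundle path are illustrative, and the host must be in maintenance mode), you might run commands similar to the following from the vSphere CLI:
vihostupdate --server esxhost01.example.com --query
vihostupdate --server esxhost01.example.com --install --bundle /tmp/update-from-esxi4.0-4.0_update03.zip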
Supported Upgrade Paths for Host Upgrade to ESXi 4.0 Update 3:
| Upgrade Deliverables | Supported Upgrade Tools | Upgrade from ESXi 3.5 Update 5 | Upgrade from ESXi 4.0 (including ESXi 4.0 Update 1 and ESXi 4.0 Update 2) |
|---|---|---|---|
| upgrade-from-esxi3.5-4.0_update03-398348.zip | VMware vCenter Update Manager with ESX host upgrade baseline; vSphere Host Update Utility | Yes | No |
| update-from-esxi4.0-4.0_update03.zip | vihostupdate | No | Yes |
| Patch definitions downloaded from the VMware portal (online) | VMware vCenter Update Manager with Host patch baseline; vSphere Host Update Utility | No | Yes |
Note: Direct upgrade from releases earlier than ESXi 3.5 Update 5 is not supported. First upgrade to a supported version, and then upgrade to ESXi 4.0 Update 3.
Patches Contained in this Release
This release contains all bulletins for the ESXi Server software that were released prior to the release date of this product. See the VMware Download Patches page.
Patch Release ESXi400-Update03 contains the following individual bulletins:
ESXi400-201105201-UG: Updates Firmware
ESXi400-201105202-UG: Updates Tools
ESXi400-201105203-UG: Updates VI Client
See the documentation listed on the download page for more information on the contents of each patch.
Resolved Issues
This section describes resolved issues in this release in the following subject areas:
Resolved issues previously documented as known issues are marked with the † symbol.
Backup
CIM and API
- vCenter Server reports the service tag of the blade chassis incorrectly
For blade servers that are running ESXi, vCenter Server incorrectly reports the service tag of the blade chassis instead of that of the blade. On a Dell or IBM blade server managed by vCenter Server, the service tag number is listed in the System section of the Configuration tab, under Processors. This issue occurs due to an incorrect value for the SerialNumber property of the OMC_Chassis CIM instance.
This issue is resolved in this release.
Guest Operating System
- Timer interrupts are delivered to the VMI-enabled guest operating system at an excessive rate †
An issue in the Virtual Machine Interface (VMI) timer causes timer interrupts to be delivered to the guest operating system at an excessive rate. This issue might occur after a vMotion migration of a virtual machine that was up for a relatively long time, such as for one hundred days.
This issue is resolved in this release.
- The virtual machine might fail on Windows 7 or Windows Server 2008 R2 guest operating systems in very specific situations when using Windows Media Player †
VMware supports Windows Media Player on both Windows 7 (32 and 64 bit) and Windows Server 2008 R2 guest operating systems. However, in rare circumstances, maximizing the Windows Media Player window while playing a video might cause the virtual machine to crash.
This issue is resolved in this release.
- For Windows 7 guests, video output might be displayed incorrectly †
Windows Media Player in a Windows 7 guest might incorrectly display video files when the video is scaled.
This issue is resolved in this release.
- Mouse movements in RDP sessions to Windows virtual machines are affected by MKS console mouse movements
If an administrator uses the vSphere Client to open a console to a Windows virtual machine on which multiple users are logged in through terminal sessions, their mouse movements might become synchronized with the mouse movements of the administrator.
This issue is resolved in this release.
- vmrun command fails to run on Ubuntu 10.04 guest operating systems
The listProcessesInGuest vmrun command might fail to run on Ubuntu 10.04 guest operating systems. The guest operating system displays an error message similar to the following:
Error: Invalid user name or password for the guest OS
This issue is resolved in this release.
- Windows guest operating system might fail with vmx_fb.dll error
Windows guest operating systems on which you have installed the VMware driver for Windows XP Display Driver Model (XPDM) might fail with a vmx_fb.dll error and display a blue screen.
The issue is resolved in this release.
- The default memory size settings for RHEL and Ubuntu 32-bit and 64-bit virtual machines are updated
The minimum and default memory sizes for RHEL and Ubuntu 32-bit and 64-bit guest operating systems are updated as follows:
For RHEL 6 32-bit, minimum memory is updated from 1GB to 512MB, default recommended memory is updated from 2GB to 1GB, maximum recommended memory is updated from 64GB to 16GB, and hard disk size is updated from 8GB to 16GB.
For RHEL 6 64-bit, default recommended memory is updated from 2GB to 1GB, and hard disk size is updated from 8GB to 16GB.
For Ubuntu 32-bit and 64-bit, minimum recommended memory is updated from 64MB to 256MB.
Miscellaneous
- ESXi host intermittently loses connection with vCenter Server due to socket exhaustion
This issue is resolved in the ESXi 4.0 Update 3 release. Patch ESXi400-201104401-SG, released before ESXi 4.0 Update 3, already contains the fix for this issue. For more information, see KB 1037259.
- Performance chart data for networking displays incorrect information
The stacked per-virtual machine performance chart data for networking displays incorrect information. You can access the chart from Chart Options in the Advanced Settings of the Performance tab. The network transmit and receive statistics of a virtual machine connected to the Distributed Virtual Switch (DVS) are interchanged and incorrectly displayed.
This issue is resolved in this release.
- vSphere Client displays incorrect provisioned space for a powered-off virtual machine
The ESXi host does not consider the memory reservation while calculating the provisioned space of a powered-off virtual machine. As a result, the vSphere Client might display different provisioned space values depending on whether the virtual machine is powered on or powered off.
This issue is resolved in this release.
- ESXi host might fail when USB devices are connected to EHCI controllers
When you connect USB devices (including baseboard management controllers (BMC) such as iLo or DRAC-based USB devices) to EHCI controllers, an issue with memory corruption might sometimes cause an ESXi host to fail with a purple screen and display error messages similar to the following:
2010-12-11T10:11:30.683Z cpu1:2606)@BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4803 - Usage error in dlmalloc
2010-12-11T10:11:30.684Z cpu1:2606)Code start: 0x418031000000 VMK uptime: 0:03:35:09.476
2010-12-11T10:11:30.685Z cpu1:2606)0x412208b87b10:[0x418031069302]Panic@vmkernel#nover+0xa9 stack: 0x410014b12e80
2010-12-11T10:11:30.687Z cpu1:2606)0x412208b87b30:[0x4180310285eb]DLM_free@vmkernel#nover+0x602 stack: 0x1
2010-12-11T10:11:30.688Z cpu1:2606)0x412208b87b70:[0x418031038ef1]Heap_Free@vmkernel#nover+0x164 stack: 0x3e8
This issue is resolved in this release.
- ESXi host fails and displays a purple screen due to a TTY-related race condition
An ESXi host might fail with a purple screen that displays an error message similar to the following when multiple threads trying to use the same TTY cause a race condition:
ASSERT bora/vmkernel/user/userTeletype.c:969
cr2=0xff9fcfec cr3=0xa114f000 cr4=0x128
This issue is resolved in this release.
- Large syslog messages are not logged on ESXi hosts
On ESXi hosts, log messages in excess of about 2048 bytes might not be written to /var/log/messages.
This issue is resolved in this release.
- The VMkernel log reports the log spew: usbdev_ioctl: REAPURBDELAY (KB 1029451)
- Performing a downgrade or upgrade by using the Dell Update Package (DUP) utility might fail on ESXi 4.0 hosts
This issue is resolved in this release.
Networking
- Traffic shaping values are reset after ESXi host restart
If you configure the traffic shaping value on a vDS or vSwitch to greater than 4GB, the value is reset to below 4GB after you restart the ESXi host. This issue causes the traffic shaper to shape traffic using much lower values, resulting in very low network bandwidth. For example, if you set the traffic shaping to a maximum bandwidth of 6Gbps, the value changes to about 1.9Gbps after you restart the ESXi host.
This issue is resolved in this release.
- bnx2 NIC reset causes ESXi host to fail
A bnx2 NIC reset might fail due to firmware synchronization timeout, and in turn cause the ESXi host to fail with a purple screen. The following is an example of a backtrace of this issue:
0x4100c178f7f8:[0x41802686d9f4]bnx2_poll+0x167 stack: 0x4100c178f838
0x4100c178f878:[0x4180267a3ec6]napi_poll+0xed stack: 0x4100c178f898
0x4100c178f938:[0x41802642abaf]WorldletBHHandler+0x426 stack: 0x417fe726c680
0x4100c178f9a8:[0x4180264218f7]BHCallHandlersInt+0x106 stack: 0x4100c178f9f8
0x4100c178f9f8:[0x418026421dc1]BH_Check+0x144 stack: 0x4100c178fae0
0x4100c178fa28:[0x41802642e524]IDT_HandleInterrupt+0x12b stack: 0x418040000000
0x4100c178fa48:[0x41802642e9f2]IDT_IntrHandler+0x91 stack: 0x0
0x4100c178fb28:[0x4180264a9b16]gate_entry+0x25 stack: 0x1
This issue is resolved in this release. The fix forces the NIC to a link-down state when the firmware synchronization times out, and thus prevents the ESXi host from failing. The following message is written to the VMkernel log:
bnx2: Resetting... NIC initialization failed: vmnicX.
- e1000e 1.1.2-NAPI driver added
In earlier releases, the Intel e1000e 1.1.2-NAPI driver was not bundled with ESXi but was provided separately for download. In this release, the e1000e 1.1.2-NAPI driver is bundled with ESXi.
- ESXi hosts might fail with bnx2x
If you use a VMware ESXi host with the Broadcom bnx2x driver, you might see the following symptoms:
- The ESXi host might lose network connectivity frequently.
- The ESXi host might stop responding with a purple diagnostic screen that displays messages similar to the following:
[0x41802834f9c0]bnx2x_rx_int@esx:nover: 0x184f stack: 0x580067b28, 0x417f80067b97, 0x
[0x418028361880]bnx2x_poll@esx:nover: 0x1cf stack: 0x417f80067c64, 0x4100bc410628, 0x
[0x41802825013a]napi_poll@esx:nover: 0x10d stack: 0x417fe8686478, 0x41000eac2b90, 0x4
- The ESXi host might stop responding with a purple diagnostic screen that displays messages similar to the following:
0:18:56:51.183 cpu10:4106)0x417f80057838:[0x4180016e7793]PktContainerGetPkt@vmkernel:nover+0xde stack: 0x1
0:18:56:51.184 cpu10:4106)0x417f80057868:[0x4180016e78d2]Pkt_SlabAlloc@vmkernel:nover+0x81 stack: 0x417f800578d8
0:18:56:51.184 cpu10:4106)0x417f80057888:[0x4180016e7acc]Pkt_AllocWithUseSizeNFlags@vmkernel:nover+0x17 stack: 0x417f800578b8
0:18:56:51.185 cpu10:4106)0x417f800578b8:[0x41800175aa9d]vmk_PktAllocWithFlags@vmkernel:nover+0x6c stack: 0x1
0:18:56:51.185 cpu10:4106)0x417f800578f8:[0x418001a63e45]vmklnx_dev_alloc_skb@esx:nover+0x9c stack: 0x4100aea1e988
0:18:56:51.185 cpu10:4106)0x417f80057918:[0x418001a423da]__netdev_alloc_skb@esx:nover+0x1d stack: 0x417f800579a8
0:18:56:51.186 cpu10:4106)0x417f80057b08:[0x418001b6c0cf]bnx2x_rx_int@esx:nover+0xf5e stack: 0x0
0:18:56:51.186 cpu10:4106)0x417f80057b48:[0x418001b7e880]bnx2x_poll@esx:nover+0x1cf stack: 0x417f80057c64
0:18:56:51.187 cpu10:4106)0x417f80057bc8:[0x418001a6513a]napi_poll@esx:nover+0x10d stack: 0x417fc1f0d078
- The bnx2x driver or firmware sends panic messages and writes a backtrace with messages similar to the following in the /var/log/message log file:
vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_attn_int_deasserted3:3379(vmnic0)]MC assert!
vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_attn_int_deasserted3:3384(vmnic0)]driver assert
vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_panic_dump:634(vmnic0)]begin crash dump
The issue is resolved in this release.
- ESXi fails with a purple diagnostic screen that displays a spin count exceeded error
A networking issue might cause an ESXi host to fail with a purple diagnostic screen that displays an error message similar to the following:
Spin count exceeded (rentry) - possible deadlock with PCPU6
This issue occurs if the system is sending traffic and modifying the routing table at the same time.
This issue is resolved in this release.
- NIC teaming policy does not work properly on legacy vSwitches
If you configure the port group policies of NIC teaming for parameters such as load balancing, network failover detection, notify switches, or failback, and then restart the ESXi host, the ESXi host might send traffic only through one physical NIC.
This issue is resolved in this release.
- Cisco Discovery Protocol (CDP) network location information is missing on ESXi hosts
The vSphere Client and the ESXi command line utility might not display CDP network location information.
This issue is resolved in this release.
- ESXi hosts might not start, or some devices might become inaccessible, when using NetXen 1G NX3031 devices or multiple 10G NX2031 devices
When you install new NetXen NICs on an ESXi 4.0 host, or when you upgrade from ESXi 3.5 to ESXi 4.0, you might see an error message similar to the following on the ESXi 4.0 host: Out of Interrupt vectors. On ESXi hosts where NetXen 1G and NX2031 10G devices do not support NetQueue, the host might run out of MSI-X interrupt vectors. Because of this issue, ESXi hosts might not start, or other devices (such as storage devices) might become inaccessible.
The issue is resolved in this release.
Server Configuration
- ESXi host does not generate warning message if Syslog is not configured
If you do not configure Syslog settings on an ESXi host, the host does not generate an alarm or configuration error message.
This issue is resolved in this release. Now a warning message similar to the following appears in the Summary tab of the ESXi host if you do not configure Syslog:
Configuration Issues
Issue detected on [host name] in : Warning: Syslog not configured. Please check Syslog options under Configuration.Software.AdvancedSettings in vSphere Client
- Syslog operations might fail when the remote host is inaccessible
The syslog service does not start if it is configured to log to a remote host that cannot be resolved through DNS when the syslogd daemon starts. This causes both remote and local logging to fail.
This issue is resolved in this release. Now if the remote host cannot be resolved, local logging is unaffected. When the syslogd daemon starts, it retries resolving and connecting to the remote host every 10 minutes.
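For reference, remote syslog on an ESXi host can also be configured from the vSphere CLI with the vicfg-syslog command; the host and log-server names below are illustrative:
vicfg-syslog --server esxhost01.example.com --setserver loghost.example.com --setport 514
vicfg-syslog --server esxhost01.example.com --show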
Storage
- Connecting to a tape library through an Adaptec card with aic79xx driver might cause ESXi to fail †
If your ESXi Server is connected to a tape library with an Adaptec HBA (for example: AHA-39320A) that uses the aic79xx driver, the server might fail when the driver tries to access a freed memory area. Such a condition is accompanied by an error message similar to:
Loop 1 frame=0x4100c059f950 ip=0x418030a936d9 cr2=0x0 cr3=0x400b9000.
This issue is resolved in this release.
- When the storage processor of the HP MSA2012fc storage array is reset, critical alerts are erroneously issued †
Resetting the storage processor of the HP MSA2012fc storage array causes the ESX/ESXi native multipath driver (NMP) module to send alerts or critical entries to VMkernel logs. These alert messages indicate that the physical media has changed for the device. However, the messages do not apply to all LUN types: they are critical only for data LUNs and do not apply to management LUNs.
This issue is resolved in this release.
- ESXi host might fail with a purple diagnostic screen while mounting virtual CD-ROM drive from the server Remote Supervisor Adapter (RSA)
This issue is resolved in this release.
- FalconStor IPStor failover results in APD (All Paths Down) state in ESXi 4.x when using Qlogic FC HBAs
When performing an IPStor failover, messages similar to the following are logged in /var/log/vmkernel:
vmkernel: 1:10:57:57.524 cpu4:4219)<3>rport-4:0-0: blocked FC remote port time out: saving binding
vmkernel: 1:10:57:57.712 cpu2:4206)<3>rport-3:0-1: blocked FC remote port time out: saving binding
QLogic has released an updated driver, included in this release, that addresses WWPN spoofing, the method that FalconStor arrays prefer for handling failover.
- ESXi host fails with a purple diagnostic screen when multiple processes access the same resource during Dentrycache initialization
This issue is resolved in this release.
- ESXi host logs messages in VMkernel log file when storage arrays are rescanned from vSphere Client
ESXi hosts might log messages similar to the following in the VMkernel log file for LUNs not mapped to ESXi hosts:
0:22:30:03.046 cpu8:4315)ScsiScan: 106: Path 'vmhba0:C0:T0:L0': Peripheral qualifier 0x1 not supported.
The issue occurs when ESXi hosts start, if you initiate a rescan operation of the storage arrays from the vSphere Client, or every 5 minutes after ESXi hosts start.
This issue is resolved in this release.
- Creation of large .vmdk files on NFS might fail
On NFS storage, attempts to create a virtual disk (.vmdk file) with a large size (for example, more than 1TB) might fail with the error A general system error occurred: Failed to create disk: Error creating disk. This issue occurs when the NFS client does not wait for sufficient time for the NFS storage array to initialize the virtual disk after the RPC parameter of the NFS client times out. By default the timeout value is 10 seconds.
This fix provides a configuration option. After applying this fix, you can make changes to the RPC timeout parameter by using the esxcfg-advcfg -s <Timeout> /NFS/SetAttrRPCTimeout command.
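For example, to raise the timeout to 30 seconds (an illustrative value) and verify the new setting, you might run commands similar to the following on the ESXi host:
esxcfg-advcfg -s 30 /NFS/SetAttrRPCTimeout
esxcfg-advcfg -g /NFS/SetAttrRPCTimeout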
- ESXi host fails during LVM operations
If you perform operations that utilize the Logical Volume Manager (LVM) such as write operations, volume re-signature, volume span, or volume growth, the ESXi host might fail with a purple diagnostic screen. Error messages similar to the following might be written to the logs:
63:05:21:52.692 cpu1:4135)OC: 941: Could not get object from FS driver: Permission denied
63:05:21:52.692 cpu1:4135)WARNING: Fil3: 1930: Failed to reserve volume f530 28 1 4be17337 9c7dae2 23004d45 22b547d 0 0 0 0 0 0 0
63:05:21:52.692 cpu1:4135)FSS: 666: Failed to get object f530 28 2 4be17337 9c7dae2 23004d45 22b547d 4 1 0 0 0 0 0 :Permission denied
63:05:21:52.706 cpu1:4135)WARNING: LVM: 2305: [naa.60060e80054402000000440200000908:1] Disk block size mismatch (actual 512 bytes, stored 0 bytes)
This issue is resolved in this release.
- Some virtual machines stop responding during storage rescan operation when any LUN on the host is in an all-paths-down (APD) state
During storage rescan operations, some virtual machines stop responding when any LUN on the host is in an all-paths-down (APD) state. For more information, see KB 1016626. To work around the issue described in the KB article while using an earlier version of ESXi host, you must manually set the advanced configuration option /VMFS3/FailVolumeOpenIfAPD to 1 before rescanning, and then reset it to 0 after completing the rescan.
This issue is resolved in this release. Now you need not apply the workaround of setting and resetting the advanced configuration option while starting the rescan operation. Virtual machines on non-APD volumes do not fail during a rescan operation, even if some LUNs are in an all-paths-down state.
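On earlier ESXi 4.0 hosts that do not include this fix, the manual workaround might look similar to the following (illustrative commands, run on the host before and after the rescan):
esxcfg-advcfg -s 1 /VMFS3/FailVolumeOpenIfAPD
(perform the rescan)
esxcfg-advcfg -s 0 /VMFS3/FailVolumeOpenIfAPD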
- ESXi host fails with an error stating that spin count is exceeded
An ESXi host connected to an NFS datastore might fail with a purple diagnostic screen that displays error messages similar to the following:
0x4100c00875f8:[0x41801d228ac8]ProcessReply+0x223 stack: 0x4100c008761c
0x4100c0087648:[0x41801d18163c]vmk_receive_rpc_callback+0x327 stack: 0x4100c0087678
0x4100c0087678:[0x41801d228141]RPCReceiveCallback+0x60 stack: 0x4100a00ac940
0x4100c00876b8:[0x41801d174b93]sowakeup+0x10e stack: 0x4100a004b510
0x4100c00877d8:[0x41801d167be6]tcp_input+0x24b1 stack: 0x1
0x4100c00878d8:[0x41801d16097d]ip_input+0xb24 stack: 0x4100a05b9e00
0x4100c0087918:[0x41801d14bd56]ether_demux+0x25d stack: 0x4100a05b9e00
0x4100c0087948:[0x41801d14c0e7]ether_input+0x2a6 stack: 0x2336
0x4100c0087978:[0x41801d17df3d]recv_callback+0xe8 stack: 0x4100c0087a58
0x4100c0087a08:[0x41801d141abc]TcpipRxDataCB+0x2d7 stack: 0x41000f03ae80
0x4100c0087a28:[0x41801d13fcc1]TcpipRxDispatch+0x20 stack: 0x4100c0087a58
This issue might occur due to a corrupted response received from the NFS server for any read operation that you perform on the NFS datastore.
This issue is resolved in this release.
- ESXi host fails when VMFS snapshot volumes are exposed to multiple hosts in a vCenter Server cluster
An ESXi host might fail with a purple diagnostic screen that displays an error message similar to the following when VMFS snapshot volumes are exposed to multiple hosts in a vCenter Server cluster:
WARNING: LVM: 8703: arrIdx (1024) out of bounds
This issue is resolved in this release.
- ESXi host fails due to megaraid_sas driver issue
When an I/O failure occurs in the megaraid_sas driver because of a device error, the sense buffer is not filled properly. This causes the ESXi host to fail.
This issue is resolved in this release.
- ESXi host fails while performing certain virtual machine operations during storage LUN path failover
During storage LUN path failover, if you perform any virtual machine operation that causes delta disk metadata updates such as create or delete snapshots, the ESXi host might fail with a purple diagnostic screen.
This issue is resolved in this release.
- Error messages logged when scanning for LUNs from iSCSI storage array
ESXi hosts might fail and display a NOT_REACHED bora/modules/vmkernel/tcpip2/freebsd/sys/support/vmk_iscsi.c:648 message on a purple screen when you scan for LUNs from an iSCSI storage array by using the esxcfg-swiscsi command or through the vSphere Client (Inventory > Configuration > Storage Adapters > iSCSI Software Adapter). This issue might occur if you manually modify the tcp.window.size parameter in the /etc/vmware/vmkiscsid/iscsid.conf file.
This fix resolves the issue and also logs warning messages in /var/log/vmkiscsid.log if the tcp.window.size parameter is modified to a value lower than its default.
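To check whether the parameter has been changed from its default, you can inspect the initiator configuration file directly; for example:
grep tcp.window.size /etc/vmware/vmkiscsid/iscsid.conf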
- ESXi host using software iSCSI initiator fails with iscsi_vmk messages
ESXi hosts using software iSCSI initiators might fail with a purple diagnostic screen that displays iscsi_vmk messages similar to the following:
#PF Exception(14) in world 4254:iscsi_trans_ ip 0x41800965fddb addr 0x8
Code starts at 0x418009000000
0x4100c04f7e50:[0x41800965fddb]iscsivmk_ConnShutdown+0x486 stack: 0x410000000000
0x4100c04f7eb0:[0x418009665e93]iscsivmk_StopConnection+0x286 stack: 0x4100c04f7ef0
0x4100c04f7ef0:[0x418009663e4c]iscsivmk_TransportStopConn+0x12b stack: 0x4100c04f7f6c
0x4100c04f7fa0:[0x418009481654]iscsitrans_VmklinkTxWorld+0x36f stack: 0x1d
0x4100c04f7ff0:[0x41800909870b]vmkWorldFunc+0x52 stack: 0x0
0x4100c04f7ff8:[0x0]Unknown stack: 0x0
This issue is known to occur when I/O delays cause I/O requests to time out and cancel.
This issue is resolved in this release.
- Snapshots of upgraded VMFS volumes fail to mount on ESXi 4.x hosts
Snapshots of VMFS3 volumes upgraded from VMFS2 with a block size greater than 1MB might fail to mount on ESXi 4.x hosts. The esxcfg-volume -l command to list the detected VMFS snapshot volumes fails with the following error message:
~ # esxcfg-volume -l
Error: No filesystem on the device
This issue is resolved in this release. Now you can mount or re-signature snapshots of VMFS3 volumes upgraded from VMFS2.
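For example, after applying this release, a detected snapshot volume can be listed and then mounted or resignatured with the esxcfg-volume command (the UUID/label argument is a placeholder):
esxcfg-volume -l
esxcfg-volume -M <VMFS-UUID|label>    (mount persistently)
esxcfg-volume -r <VMFS-UUID|label>    (resignature instead of mounting)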
- ESXi host fails with Found invalid object error
An ESXi host might fail with a purple diagnostic screen that displays error messages similar to the following:
21:04:02:05.579 cpu10:4119)WARNING: Fil3: 10730: Found invalid object on 4a818bab-b4240ea4-5b2f-00237de12408 expected
21:04:02:05.579 cpu10:4119)FSS: 662: Failed to get object f530 28 2 4a818bab b4240ea4 23005b2f 824e17d 4 1 0 0 0 0 0 :Not found
21:04:02:05.579 cpu0:4096)VMNIX: VMKFS: 2521: status = -2
This issue occurs when a VMFS volume has a corrupt address in the file descriptor.
This issue is resolved in this release.
- Rescan operations take a long time or time out with a read-only VMFS volume
Rescan or add-storage operations that you run from the vSphere Client might take a long time to complete or fail due to a timeout, and messages similar to the following are written to /var/log/vmkernel:
Jul 15 07:09:30 [vmkernel_name]: 29:18:55:59.297 <cpu id>ScsiDeviceToken: 293: Sync IO 0x2a to device "naa.60060480000190101672533030334542" failed: I/O error H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.
Jul 15 07:09:30 [vmkernel name]: 29:18:55:59.298 cpu29:4356)NMP: nmp_CompleteCommandForPath: Command 0x2a (0x4100b20eb140) to NMP device "naa.60060480000190101672533030334542" failed on physical path "vmhba1:C0:T0:L100" H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.
Jul 15 07:09:30 [vmkernel_name]: 29:18:55:59.298 cpu29:4356)ScsiDeviceIO: 747: Command 0x2a to device "naa.60060480000190101672533030334542" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.
VMFS continues trying to mount the volume even if the LUN is read-only.
This issue is resolved in this release. Starting with this release, VMFS does not attempt to mount the volume when it receives the read-only status.
- Target information for some LUNs is missing in the vCenter Server UI
Target information for LUNs is sometimes not displayed in the vCenter Server UI. In releases earlier than ESXi 4.0 Update 3, some iSCSI LUNs do not show the target information. To view this information in the Configuration tab, perform the following steps:
- Click Storage Adapters under Hardware.
- Click iSCSI Host Bus Adapter in the Storage Adapters pane.
- Click Paths in the Details pane.
- ESXi hosts using QLogic HBAs become unresponsive due to heap memory depletion
When using QLogic HBAs, ESXi 4.x hosts might become unresponsive due to heap memory depletion.
The hosts are disconnected in vCenter Server and are inaccessible through SSH or vSphere Client. Error messages similar to the following are written to the VMkernel log:
vmkernel: 17:12:38:35.647 cpu14:6799)WARNING: Heap: 1435: Heap qla2xxx already at its maximumSize. Cannot expand.
vmkernel: 17:12:38:35.647 cpu14:6799)WARNING: Heap: 1645: Heap_Align(qla2xxx, 96/96 bytes, 64 align) failed. caller: 0x418011ea149b
This issue is resolved in this release.
- VMFS logs misleading error messages
VMFS volumes might log misleading error messages similar to the following in the VMkernel log file; these messages suggest disk corruption but are actually caused by a benign uninitialized log buffer:
Aug 4 21:45:43 [host name] vmkernel: 114:02:53:33.345 cpu9:21627)FS3: 3833: FS3DiskLock for [type bb9c7cd0 offset 13516784692132593920 v 13514140778984636416, hb offset 16640
Aug 4 21:45:43 [host name] vmkernel: gen 0, mode 16640, owner 00000006-4cd3bbfe-fece-e61f133cdd37 mtime 35821792] failed at 60866560 on volume [volume name]
This issue is resolved in this release.
- Connecting certain USB storage devices might cause an ESXi host to fail
Connecting certain USB storage devices might cause an ESXi host to fail with a purple screen and display an error message similar to the following:
@BlueScreen: #UD Exception(6) in world 1037009:usb-storage @ 0x41803a844eb6
Code starts at 0x41803a400000
0x4100c168fe90:[0x41803a844eb6]usb_stor_invoke_transport+0x73d stack: 0x10
0x4100c168feb0:[0x41803a842158]usb_stor_transparent_scsi_command+0x1b stack: 0x7d41af17b64564
0x4100c168ff30:[0x41803a8467b9]usb_stor_control_thread+0x7c4 stack: 0x4100b0fdbaa0
0x4100c168ff60:[0x41803a7c51f2]kthread+0x79 stack: 0x41000000000e
0x4100c168ffa0:[0x41803a7c2b62]LinuxStartFunc+0x51 stack: 0xe
0x4100c168fff0:[0x41803a49870b]vmkWorldFunc+0x52 stack: 0x0
This issue is resolved in this release. ESXi now detects these protocol violations and handles them.
- ESXi host becomes unresponsive if one of the mirrored installation drives is removed from the server
An ESXi host might become unresponsive if you unexpectedly remove one of the mirrored installation drives from the server that is connected to an LSI SAS controller. The ESXi host shows messages similar to the following:
[329125.915302] sd 0:0:0:0: still retrying 0 after 360 s
[329175.056594] sd 0:0:0:0: still retrying 0 after 360 s
[329226.201904] sd 0:0:0:0: still retrying 0 after 360 s
[329276.339208] sd 0:0:0:0: still retrying 0 after 360 s
[329326.478513] sd 0:0:0:0: still retrying 0 after 360 s
This issue is resolved in this release.
- ESXi host fails with a purple screen and displays an Unhandled Async_Token ENOMEM Condition error
The ESXi host fails due to a memory allocation failure that occurs when an Async_Token is allocated for handling I/O on a system that is under memory constraints.
This release minimizes the occurrence of this issue.
Upgrade and Installation
- Cannot revert to snapshots created on ESXi 3.5 hosts
ESXi hosts cannot revert to a previous snapshot after you upgrade from ESXi 3.5 Update 4 to ESXi 4.0 Update 3. The following message might be displayed in vCenter Server when you attempt such an operation: The features supported by the processor(s) in this machine are different from the features supported by the processor(s) in the machine on which the checkpoint was saved. Please try to resume the snapshot on a machine where the processors have the same features. This issue might occur when you create virtual machines on ESXi 3.0 hosts, perform vMotion and suspend virtual machines on ESXi 3.5 hosts, and resume them on ESXi 4.x hosts.
In this release, the error message does not appear. You can revert to snapshots created on ESXi 3.5 hosts, and resume the virtual machines on ESXi 4.x hosts.
- This release provides an updated version of the PVSCSI driver, which enables you to install the Windows XP guest operating system
Virtual Machine Management
- Devices that are not hot-removable from the vSphere Client can be removed on a Fault Tolerance-enabled virtual machine
When Fault Tolerance is enabled, you should not be able to hot-remove devices such as NICs and SCSI controllers from the vSphere Client while the virtual machine is running. However, these devices appear as removable in the Windows system tray of the virtual machine and can be removed from within the guest operating system.
This issue is resolved in this release. Now you cannot remove devices from the virtual machine's system tray when Fault Tolerance is enabled.
- Snapshot delta files are not deleted when snapshots are created in a custom directory
All snapshots are created in a default virtual machine directory. However, if you specify a custom directory for snapshots, snapshot delta files might remain in this directory when you delete snapshots. These redundant files eventually fill up disk space and need to be deleted manually.
This issue is resolved in this release.
- Stopping VMware Tools while taking a quiesced snapshot of a virtual machine causes hostd to fail
This issue is resolved in this release. Now the quiesced snapshot operation fails gracefully if you stop VMware Tools.
- Virtual machines fail to power on in some cases even when swap space exists on ESXi hosts
When you power on virtual machines running on an ESXi 4.0 host, the operation fails and logs an Insufficient COS swap to power on error message in /var/log/vmware/hostd.log even though free swap space is available. After you install this update, such virtual machines power on successfully.
- ESXi host might become unresponsive during a RevertSnapshot or RevertToCurrentSnapshot operation
An ESXi host might become unresponsive when you perform a RevertSnapshot or RevertToCurrentSnapshot operation from the vCenter Server. The following error message is logged in hostd.log:
Apr 22 08:46:26 Hostd: [2010-04-22 08:46:26.381 'App' 49156 error] Caught signal 11
This issue is resolved in this release.
- Virtual machine fails during vMotion
If the NFS volume hosting a virtual machine encounters errors, the NVRAM file of the virtual machine might become corrupted and grow in size from the default 8KB up to a few gigabytes. At this time, if you perform a vMotion or a suspend operation, the virtual machine fails with an error message similar to the following:
unrecoverable memory allocation failures at bora/lib/snapshot/snapshotUtil.c:856
This issue is resolved in this release.
- Virtual machine command issued through hostd immediately after a virtual machine powers off fails
A virtual machine command such as PowerOn that you issue through hostd immediately after a virtual machine powers off might fail with an error message similar to the following:
A specified parameter was not correct
An error message similar to the following might be written to the vCenter log:
[2009-11-16 15:06:09.266 01756 error 'App'] [vm.powerOn] Received unexpected exception
This issue is resolved in this release.
- Virtual machines configured with CPU limits running on ESXi 4.x hosts experience performance degradation (KB 1030955)
- Virtual machine sometimes powers off while creating or deleting snapshots
While performing snapshot operations, if you simultaneously perform another task such as browsing a datastore, the virtual machine might be abruptly powered off. Error messages similar to the following are written to vmware.log:
vmx| [msg.disk.configureDiskError] Reason: Failed to lock the file
vmx| Msg_Post: Error
vmx| [msg.checkpoint.continuesync.fail] Error encountered while restarting virtual machine after taking snapshot. The virtual machine will be powered off.
The issue occurs when a file required by the virtual machine for one operation is accessed by another process.
This issue is resolved in this release.
- Committing a snapshot from the command line fails with a backtrace
When running the vmware-cmd <vmx> removesnapshots command, you might see a backtrace similar to the following:
Traceback (most recent call last):
File "/usr/bin/vmware-cmd", line 88, in ?
main()
File "/usr/bin/vmware-cmd", line 63, in main
operationName, result = CommandProcessor.Process(host, args)
File "/usr/lib/vmware/vmware-cmd/CommandProcessor.py", line 11, in Process
result = operation.DoIt(*processedArgs)
File "/usr/lib/vmware/vmware-cmd/operations/SnapshotOps.py", line 122, in DoIt
The vmware-cmd <vmx> removesnapshots command fails because the <vm_name>-aux.xml file located in the same directory as the virtual machine configuration file is empty. When a virtual machine is created or registered on a host, the contents of the <vm_name>-aux.xml file are read and the _view object is populated. If the XML file is empty, the _view object is not populated, which results in an error when consolidating the snapshot.
This issue is resolved in this release.
- Memory hot-add fails if the assigned virtual machine memory equals the size of its memory reservation
An error message similar to the following appears in the vSphere Client:
Hot-add of memory failed. Failed to resume destination VM: Bad parameter. Hotplug operation failed
Messages similar to the following are written to the /var/log/vmkernel log file:
WARNING: FSR: 2804: 1270734344 D: Received invalid swap bitmap lengths: source 0, destination 32768! Failing migration.
WARNING: FSR: 3425: 1270734344 D: Failed to transfer swap state from source VM: Bad parameter
WARNING: FSR: 4006: 1270734344 D: Failed to transfer the swap file from source VM to destination VM.
This issue occurs if Fast Suspend Resume (FSR) fails during the hot-add operation.
This issue is resolved in this release.
vMotion and Storage vMotion
VMware Tools
- Perfmon does not list virtual machine performance counters after VMware Tools installation
After you install VMware Tools, VM Memory and VM Processor might not appear in the performance counters list in the Windows Performance Monitor (Perfmon) when another process accesses the file that is required by the virtual machine for an operation. Performing an upgrade or repair of VMware Tools does not resolve this problem.
After you install this update release, you can upgrade or repair VMware Tools to resolve the issue.
- Creation of quiesced snapshots might not work on non-English versions of Microsoft Windows guest operating systems
The issue occurs when a Windows folder path contains non-ASCII characters (for example, in the case of the application-data folder in Czech Windows guest operating systems). The presence of non-ASCII characters causes the snapshot operation to fail.
This issue is resolved in this release.
- VMware Tools upgrade button in the Windows Control Panel is disabled for non-administrator users
The VMware Tools upgrade button in the Windows Control Panel of a Windows guest operating system is disabled for non-administrator users. The Shrink and Scripts options in the VMware Tools Control Panel are also disabled for non-administrator users. This fix is only a UI change and does not block upgrades from custom applications. To block VMware Tools upgrades for all users, set the isolation.tools.autoinstall.disable="TRUE" parameter in the VMX file, as shown in the example below.
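For example, adding the following line to a virtual machine's .vmx configuration file (while the virtual machine is powered off) blocks automatic Tools upgrades for that virtual machine:
isolation.tools.autoinstall.disable = "TRUE"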
- Configuration file entries are overwritten on Linux virtual machines while installing VMware Tools
When you install or update VMware Tools on Linux virtual machines, the VMware Tools installer might overwrite entries made by third-party development tools in configuration files (such as the /etc/updatedb.conf file on Red Hat and Ubuntu, and /etc/sysconfig/locate on SuSE). This might affect cron jobs that run updatedb on these virtual machines.
This issue is resolved in this release.
- Creation of quiesced snapshots might fail on some non-English versions of Windows guest operating systems
Quiesced snapshots might fail on some non-English versions of Windows guest operating systems, such as French versions of Microsoft Windows Server 2008 R2 and Microsoft Windows 7 guest operating systems. This issue occurs because the VMware Snapshot Provider service does not get registered as a Windows service or as a COM+ application properly on some non-English versions of Microsoft Windows guest operating systems. This issue causes the snapshot operation to fail, and as a result, no snapshot is created.
This issue is resolved in this release.
- Extraneous errors are displayed when restarting Linux virtual machines after installing VMware Tools
After you install VMware Tools for Linux and restart the guest operating system, the device manager for the Linux kernel (udev) might report extraneous errors similar to the following:
May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'SUBSYSTEMS'
May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{vendor}'
May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{model}'
May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'SUBSYSTEMS'
May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{vendor}'
May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'AT
This issue is resolved in this release. Now the VMware Tools installer for Linux detects the device and writes only system-specific rules.
- Network connectivity fails on a Windows virtual machine after VMware Tools upgrade if HGFS driver is not uninstalled properly
When you upgrade VMware Tools with HGFS installed from ESX 3.5 to ESX 4.0, the HGFS driver might not be uninstalled properly. As a result, the Windows virtual machine's network Provider Order tab at Network Connections > Advanced > Advanced Settings displays incorrect information, and the virtual machine might lose network connectivity.
This issue is resolved in this release. Now the earlier version of the HGFS driver and all related registry entries are uninstalled properly during upgrade.
- Updates PVSCSI driver
In this release, the PVSCSI driver is updated to version 1.0.7.0 for Windows XP (32/64-bit), Windows Server 2003 (32/64-bit), Windows Vista (32/64-bit), Windows Server 2008 RTM (32/64-bit), Windows 7 (32/64-bit), and Windows Server 2008 R2 (64-bit) guest operating systems.
- Certain guest operating systems encounter an unresolved symbol __stack error when you install VMware Tools
While installing VMware Tools on certain guest operating systems, such as RHEL 3, you might see an error message similar to the following:
Symbol __stack_chk_fail from module /usr/X11R6/lib/modules/drivers/vmware_drv.o is unresolved!
This issue is resolved in this release.
- Automatic VMware Tools upgrade does not work if hardware acceleration is set to None on a Windows virtual machine
If hardware acceleration is set to None on a Windows virtual machine, even if you configure it with the Check and upgrade Tools before each power-on option, VMware Tools is not upgraded immediately after you restart the virtual machine. VMware Tools is upgraded only when you respond to the hardware acceleration dialog box that the virtual machine displays after you log in.
This issue is resolved in this release.
Known Issues
This section describes known issues in this release in the following subject areas:
Known issues not previously documented are marked with the * symbol.
Backup
- VMware Consolidated Backup (VCB) 1.5 Update 1 with Windows 7 and Windows 2008 R2 x64
VMware Consolidated Backup (VCB) 1.5 Update 1 supports full virtual machine backup and restore of Windows 7 and Windows 2008 R2 x64 guest operating systems. However, file level backup is not supported on Windows 7 or Windows 2008 R2 x64 guest operating systems.
- VMware Consolidated Backup (VCB) is not supported with Fault Tolerance
A VCB backup performed on an FT-enabled virtual machine powers off both the primary and the secondary virtual machines and might render the virtual machines unusable.
Workaround: None
CIM and API
- Some power supply VRM sensors are not displayed in the vCenter Hardware Status tab for IBM x3850 and x3950 M2 servers
In the vCenter Server Hardware Status tab, sensors are not displayed for all states of the power supply VRM sensor on IBM x3850 and x3950 M2 servers. Due to a defect in IBM BMC firmware 4.5, CIM instances are not created for each state of the power supply VRM sensor on these servers, so the corresponding sensors are not displayed.
- Incorrect version number listed in Small Footprint CIM Broker
The Small Footprint CIM Broker version listed by the SLP service is incorrect. This release contains SFCB version 1.3.3, but the SLP query information lists the version as 1.3.0. The incorrect version number does not affect the usage of the SLP service. Currently, there is no workaround for this issue.
- CIM Indication subscription is lost after an ESXi update
CIM Indication subscription is lost when upgrading between ESXi updates or when applying patches. The information regarding where to send the indication is overwritten by the upgrade and, therefore, lost.
Workarounds: Either of the following workarounds can be effective. Employ the method that best suits your deployment.
- Resubscribe to the CIM indications.
You might not be able to employ this workaround. Sometimes resubscribing to the CIM indications is not an option.
- Copy the appropriate files from the backup repository to the new repository, as described in the substeps that follow (see the example commands after these steps).
This workaround recovers the CIM XML indication subscriptions.
- Move the following files from the backup repository to the new repository:
cim_indicationfilter
cim_indicationfilter.idx
cim_indicationhandlercimxml
cim_indicationhandlercimxml.idx
cim_indicationsubscription
cim_indicationsubscription.idx
cim_listenerdestinationcimxml
cim_listenerdestinationcimxml.idx
For example, move the preceding files from the backup repository, such as /var/lib/sfcb/registration/repository.previous/root/interop, to the new repository, such as /var/lib/sfcb/registration/repository/root/interop.
- Restart the sfcbd-watchdog process.
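On an ESXi host, the file copies and the service restart might look similar to the following (using the example paths above; the two wildcard patterns together cover the eight files listed):
cp /var/lib/sfcb/registration/repository.previous/root/interop/cim_indication* /var/lib/sfcb/registration/repository/root/interop/
cp /var/lib/sfcb/registration/repository.previous/root/interop/cim_listenerdestinationcimxml* /var/lib/sfcb/registration/repository/root/interop/
/etc/init.d/sfcbd-watchdog restart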
Guest Operating System
- Solaris 10 U4 virtual machine becomes unresponsive during VMware Tools upgrade
Upgrading or restarting VMware Tools in a Solaris 10 U4 virtual machine with an advanced vmxnet adapter might cause the guest operating system to become unresponsive and the installation to be unable to proceed.
Solaris 10 U5 and later versions are not affected by this issue.
Workaround: Before installing or upgrading VMware Tools, temporarily reconfigure the advanced vmxnet adapter by removing its auto configuration files in /etc/ or removing the virtual hardware.
- Devices attached to hot-added BusLogic adapter are not visible to Linux guest
Devices attached to a hot-added BusLogic adapter are not visible to a Linux guest if the guest previously had another BusLogic adapter. In addition, hot removal of the BusLogic adapter might fail. This issue occurs because the BusLogic driver available in Linux distributions does not support hot-plug APIs. This problem does not affect hot adding disks to the adapter, only hot adding the adapter itself.
Workaround: Use a different adapter, such as a parallel or SAS LSI Logic adapter, for hot-add capabilities. If a BusLogic adapter is required, attempt a hot remove of the adapter after unloading the BusLogic driver in the guest. You can also attempt to get control of the hot-added adapter by loading another instance of the BusLogic driver, by running the command modprobe -o BusLogic1 BusLogic (replacing BusLogic1 with BusLogic2, BusLogic3, and so on, for each subsequent hot-add operation).
- Virtual machines with Windows NT guests require a response to a warning message generated when the virtual machine attempts to automatically upgrade VMware Tools
If you set the option to automatically check and upgrade VMware Tools before each power-on operation for Windows NT guests, the following warning message appears:
Set up failed to install the vmxnet driver Automatically, This driver will have to be installed manually
Workaround: The upgrade stops until the warning is acknowledged. To complete the upgrade, log in to the Windows NT guest and acknowledge the warning message.
- Creating a virtual machine of Ubuntu 7.10 Desktop can result in the display of a black screen
When you run the installation for the Ubuntu 7.10 Desktop guest on a virtual machine with paravirtualization enabled on an AMD host, the screen of the virtual machine might remain blank. The correct behavior is for the installer to provide instructions for you to remove the CD from the tray and press return.
Workaround: Press the return key. The installation proceeds to reboot the virtual machine. Furthermore, this issue does not occur if you start the installation on a virtual machine with two or more virtual processors.
- An automatic VMware Tools upgrade might fail for a Red Hat Enterprise Linux 5.x virtual machine
An automatic VMware Tools upgrade might fail with the Error upgrading VMware Tools error for a Red Hat Enterprise Linux 5.x virtual machine cold migrated from an ESXi 3.0.3 host to an ESX/ESXi 4.0 Update 1 host.
Workaround: Manually upgrade VMware Tools on the ESXi 4.0 Update 1 host.
Internationalization
Licensing
Miscellaneous
- Stopping or restarting the vCenter Server service through the Windows Services Control MMC plug-in might lead to an error message
Under certain circumstances, the vCenter Server service might take longer than usual to start. Stopping and restarting the vCenter Server service through the Windows Services Control MMC plug-in might lead to the following error message:
Service failed to respond in a timely manner.
This message indicates that the time required to shut down or start up vCenter Server was more than the configured system-wide default timeout for starting or stopping the service.
Workaround: Refresh the Services Control screen after a few minutes, which should show that the service has been correctly stopped and restarted.
Networking
- NetXen chipset does not have hardware support for VLANs
The NetXen NIC does not display Hardware Capability Support for VMNET_CAP_HW_TX_VLAN and VMNET_CAP_HW_RX_VLAN. This occurs because the NetXen chipset does not have hardware support for VLANs. NetXen VLAN support is available in software.
- The Custom creation of a virtual machine allows a maximum of four NICs to be added
During the creation of a virtual machine using the Custom option, the vSphere Client provides the Network configuration screen. On that screen, you are asked how many NICs you would like to connect. The drop-down menu allows up to four NICs only. However, 10 NICs are supported on ESX/ESXi 4.0 Update 1.
Workaround: Add more NICs by performing the following steps.
- Using the vSphere Client, navigate to Home>Inventory>VMs and Templates.
- With the Getting Started tab selected, click Edit virtual machine settings.
- Click Add.
- Select Ethernet Adapter and click Next.
- Continue selecting the appropriate settings for your specific scenario.
- The VmwVmNetNum of VM-INFO MIB displays as Ethernet0 when running snmpwalk
When snmpwalk is run for the VM-INFO MIB on an ESX/ESXi host, the VmwVmNetNum of the VM-INFO MIB is displayed as Ethernet0 instead of Network Adapter1, while the MOB URL in the VmwVmNetNum description displays Network Adapter1.
Workaround: None
- Applications that use VMCI Sockets might fail after virtual machine migration
If you have applications that use Virtual Machine Communication Interface (VMCI) Sockets, the applications might fail after virtual machine migration if the VMCI context identifiers used by the application are already in use on the destination host. In this case, VMCI stream or Datagram sockets that were created on the originating host stop functioning properly. It also becomes impossible to create new stream sockets.
Workaround: For Windows guest operating systems, reload the guest VMCI driver by rebooting the guest operating system or enabling the device through the device manager. For Linux guests, shut down applications that use VMCI Sockets, remove and reload the vsock kernel module (see the example below), and restart the applications.
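For a Linux guest, reloading the vsock module might look similar to the following (run as root, after stopping the applications that use VMCI Sockets):
modprobe -r vsock
modprobe vsock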
- Applying port groups with multiple statically assigned VMKNICs or VSWIFs results in repeated prompts for an IP address
Applying a vDS port group with multiple statically assigned VMKNICs or VSWIFs causes a situation in which the user is repeatedly prompted to enter an IP address. DHCP assigned interfaces are not affected.
Workaround: Use only one statically assigned VMKNIC or VSWIF per port group. If multiple statically assigned VMKNICs are desired on the same vDS port group, then assign each VMKNIC or VSWIF to a unique set of services (for example, vMotion, Fault Tolerance, and other services).
- The retrieval of DNS and host name information from the DHCP server might be delayed or prevented
- When ESX 3.5 hosts are upgraded to ESX 4.0, some network devices load the e1000e driver instead of e1000 *
When ESX 3.5 Update 4 or ESX 3.5 Update 5 hosts are upgraded to ESX 4.0 or later, the following network devices have the e1000e driver loaded instead of e1000:
- Intel 82571EB Gigabit Ethernet Controller (including 105e, 105f, 1060, 10a4, 10a5, 10bc)
- Intel 82572EI Gigabit Ethernet Controller (including 107d, 107e, 107f, 10b9)
- Intel 82573V Gigabit Ethernet Controller (including 108b)
- Intel 82573E Gigabit Ethernet Controller (including 108c)
- Intel 80003ES2LAN Gigabit Ethernet Controller (including 1096, 1098, 10ba, 10bb)
- Intel 82573L Gigabit Ethernet Controller (including 109a)
- Changing the network settings of an ESX/ESXi host prevents some hardware health monitoring software from auto-discovering it
After the network settings of an ESX/ESXi host are changed, third-party management tools that rely on the CIM interface (typically hardware health monitoring tools) are unable to automatically discover the host through the Service Location Protocol (SLP) service.
Workaround: Manually enter the hostname or IP address of the host in the third-party management tool. Alternatively, restart slpd and sfcbd-watchdog by using the applicable method:
On ESXi:
- Enter the Technical Support Mode.
- Restart slpd by running the /etc/init.d/slpd restart command.
- Restart sfcbd-watchdog by running the /etc/init.d/sfcbd-watchdog restart command.
Alternatively, restart the management agents from the Direct Console User Interface (DCUI). This restarts other agents on the host in addition to the ones affected by this defect and might be more disruptive.
On ESX: In the ESX service console, run the following commands:
/etc/init.d/slpd restart
/etc/init.d/sfcbd-watchdog restart
Server Configuration
-
Host profiles do not capture or duplicate physical NIC duplex information
When you create a new host profile, the physical NIC duplex information is not captured. This is the intended behavior. Therefore, when the reference host's profile is used to configure other hosts, the operation negotiates the duplex configuration on a per physical NIC basis. This provides you with the capability to generically handle hosts with a variety of physical NIC capabilities.
Workaround: To set the physical NIC duplex value uniformly across NICs and hosts that are to be configured using the reference host profile, modify the host profile after it is created and reapply the parameters.
To edit the profile, follow the steps below.
- On the vSphere Client Home page, click Host Profiles.
- Select the host profile in the inventory list, then click the Summary tab and click Edit Profile.
- Select Network configuration > Physical NIC configuration > Edit.
- Select Fixed physical NIC configuration in the drop-down menu and enter the speed and duplex information.
Storage
-
Adding a QLogic iSCSI adapter to an ESX/ESXi system fails if an existing target with the same name but a different IP address exists
Adding a static target for a QLogic hardware iSCSI adapter fails if an existing target has the same iSCSI name, even if its IP address is different.
The adapter identifies a target by its iSCSI name alone, not by the combination of IP address and iSCSI name. In addition, the driver and firmware do not support multiple sessions to the same storage endpoint.
Workaround: None. Do not use the same iSCSI name when you add targets.
-
On rare occasions, after repeated SAN path failovers, operations that involve VMFS changes might fail for all ESX/ESXi hosts accessing a particular LUN
On rare occasions, after repeated path failovers to a particular SAN LUN, attempts to perform such operations as VMFS datastore creation, vMotion, and so on might fail on all ESX/ESXi hosts accessing this LUN. The following warnings appear in the log files of all hosts:
I/O failed due to too many reservation conflicts.
Reservation error: SCSI reservation conflict
If you see the reservation conflict messages on all hosts accessing the LUN, this indicates that the problem is caused by the SCSI reservations for the LUN that are not completely cleaned up.
Workaround: Run the following LUN reset command from any system in the cluster to remove the SCSI reservation:
vmkfstools -L lunreset /vmfs/devices/disks/
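The command takes the device identifier of the affected LUN; for example (the identifier below is a placeholder for the value shown for the LUN in your environment):
vmkfstools -L lunreset /vmfs/devices/disks/<naa_device_identifier>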
-
NAS datastores report incorrect available space
When you view the available space for an ESX/ESXi host by using the df (ESXi) or vdf (ESX) command in the host console, the space reported for NAS datastores is free space, not available space. The Free column that appears when you select Storage > Datastores on the vSphere Client Configuration tab also reports free space rather than available space for NFS volumes. In both cases, free space can differ from available space.
ESX file systems do not distinguish between free blocks and available blocks, and always report free blocks for both (specifically, in the f_bfree and f_bavail fields of struct statfs). For NFS volumes, free blocks and available blocks can differ.
Workaround: Check the NFS server to obtain correct information about available space. No workaround is available on ESX/ESXi.
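As an illustration, on a Linux-based NFS server you can compare the two values directly. The export path and device below are hypothetical, and the reserved-block behavior shown applies to ext2/ext3-style file systems:
df -k /export/nfsvol                      # the Avail column shows space clients can actually use
tune2fs -l /dev/sda1 | grep -i reserved   # reserved blocks count as free but not available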
-
Harmless warning messages concerning region conflicts are logged in the VMkernel logs for some IBM servers
When the SATA/IDE controller works in legacy PCI mode in the PCI config space, an error message similar to the following might appear in the VMkernel logs:
WARNING: vmklinux26: __request_region: This message has repeated 1 times: Region conflict @ 0x0 => 0x3
Workaround: Such error messages are harmless and can be safely ignored.
-
When the storage processor of the HP MSA2012fc storage array is reset, critical alerts are erroneously issued *
Resetting the storage processor of the HP MSA2012fc storage array causes the ESX/ESXi native multipath driver (NMP) module to log alerts or critical entries in the VMkernel logs, indicating that the physical media has changed for the device. However, these messages are relevant only for data LUNs; they do not apply to management LUNs.
Workaround: No workaround. In this scenario, you can safely ignore alerts logged in reference to management LUNs.
-
A virtual machine can go into an endless loop of resetting SCSI LUNs, which prevents the virtual machine from being shut down *
When a virtual machine's SCSI driver (either BusLogic or LSI Logic) resets its LUNs for any reason, the reset can enter an endless loop. Attempts to kill the virtual machine are unsuccessful.
-
Service console commands might provide misleading information about Cisco UCS Qlogic FCoE controllers
On Cisco Unified Computing System (UCS) systems with a Qlogic FCoE controller, service console commands esxcfg-scsidevs -a and lspci might not identify the controller as a Qlogic FCoE controller, but instead specify the controller as a Fibre Channel controller.
For example, the output of the following service console commands does not identify the Cisco UCS Qlogic FCoE controllers specifically as FCoE controllers.
- The lspci command for Cisco UCS Qlogic FCoE Controllers:
04:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
04:00.1 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
- The esxcfg-scsidevs -a command for Cisco UCS Qlogic FCoE Controllers:
vmhba1 qla2xxx link-up fc.20010025b500000a:20000025b500001a (0:4:0.0) QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA
vmhba2 qla2xxx link-up fc.20010025b500000a:20000025b500000a (0:4:0.1) QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA
Supported Hardware
-
VMware ESX might fail to boot on Dell 2900 servers
If your Dell 2900 server has a version of BIOS earlier than 2.1.1, ESX VMkernel might stop responding while booting. This is due to a bug in the Dell BIOS, which is fixed in BIOS version 2.1.1.
Workaround: Upgrade BIOS to the version 2.1.1 or later.
- VMware ESXi Embedded fails to boot on HP DL385 G2 servers when BIOS uses USB 1.1 controller
VMware ESXi Embedded systems do not recognize the USB 1.1 controller on HP DL385 G2 servers. As a result, the ESXi system fails to boot. This problem always occurs on HP DL385 G2 servers when the BIOS is set to use the USB 1.1 Controller.
Workaround: During the boot phase of an ESXi Embedded system, enable the USB 2.0 controller in the BIOS settings. In some installations, this controller appears as V1.1+V2.0.
-
No CIM indication alerts are received when the power supply cable and power supply unit are reinserted into HP servers
No new SEL (IML) entries are created when the power supply cable and power supply unit are reinserted into HP servers while recovering a failed power supply. As a result, no CIM indication alerts are generated for these events.
Workaround: None
-
Slow performance during virtual machine power on or disk I/O on ESX/ESXi on the HP G6 platform with P410i or P410 Smart Array controller
Hosts on the HP G6 platform with a P410i or P410 Smart Array controller might show slow performance during virtual machine power on or while generating disk I/O. The major symptom is degraded I/O performance, which causes large numbers of error messages similar to the following to be logged to /var/log/messages:
Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL!
Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL!
Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060600) to NMP device "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)WARNING: NMP: nmp_DeviceRetryCommand: Device "naa.600508b1001030304643453441300100": awaiting fast path state update for failover with I/O blocked. No prior reservation exists on the device.
Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060700) to NMP device "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
Workaround: Install the HP 256MB P-series Cache Upgrade module.
-
On certain versions of vSphere Client, the battery status might be incorrectly listed as an alert
On the vSphere Client Hardware Status tab, when the battery is in its learn cycle, the battery status shows an alert indicating that the health state of the battery is not good. However, the battery is actually fine.
Workaround: None.
-
A "Detected Tx Hang" message appears in the VMkernel log
Under a heavy load, due to hardware errata, some variants of e1000 NICs might lock up. ESX/ESXi detects the issue and automatically resets the card. This issue is related to Tx packets, TCP workloads, and TCP Segmentation Offloading (TSO).
Workaround: You can disable TSO by setting the /adv/Net/UseHwTSO option to 0 in the esx.conf file.
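For example, you might set the option from the service console with esxcfg-advcfg instead of editing esx.conf directly. This is a sketch; verify the option path on your host before applying it:
esxcfg-advcfg -s 0 /Net/UseHwTSO   # disable hardware TSO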
Upgrade and Installation
-
vSphere Host Update Utility displays incorrect error message while upgrading from ESXi 3.5.x to ESXi 4.0.x *
When you upgrade from ESXi 3.5.x to ESXi 4.0.x by using the vSphere Host Update Utility, the Utility might show an error message similar to the following:
Attempted to read or write protected memory. This is often an indication that other memory is corrupt
Workaround: None. You can ignore this error message.
-
The vCenter Server system's Database Upgrade wizard might overestimate the disk space requirement during an upgrade from VirtualCenter 2.0.x to vCenter Server 4.0
During the upgrade of VirtualCenter 2.0.x to vCenter Server 4.0, the Database Upgrade wizard can show an incorrect value in the database disk space estimation. The estimation shown is typically higher than the actual space required.
Workaround: None
-
vSphere Client installation might fail with Error 1603 if you do not have an active Internet connection
You can install the vSphere Client in two ways: from the vCenter Server media or by clicking a link on the ESX, ESXi, or vCenter Server Welcome screen. The installer on the vCenter Server media (.iso file or .zip file) is self-contained, including a full .NET installer in addition to the vSphere Client installer. The installer called through the Welcome screen includes a vSphere Client installer that makes a call to the Web to get .NET installer components.
If you do not have an Internet connection, the second vSphere Client installation method will fail with Error 1603 unless you already have .NET 3.0 SP1 installed on your system.
Workaround: Establish an Internet connection before attempting the download, install the vSphere Client from the vCenter Server media, or install .NET 3.0 SP1 before clicking the link on the Welcome screen.
-
If SQL Native Client is already installed, you cannot install vCenter with the bundled SQL Server 2005 Express database
When you are installing vCenter with the bundled SQL Server 2005 Express database, if SQL Native Client is already installed, the installation fails with the following error message:
An Installation package for the product Microsoft SQL Native Client cannot be found. Try the installation using a valid copy of the installation package sqlcli.msi.
Workaround: Uninstall SQL Native Client if it is not used by another application. Then, install vCenter with the bundled SQL Server 2005 Express database.
-
vSphere Client 4.0 download times out with an error message when you connect VI Client 2.0.x on a Windows 2003 machine to vCenter Server or an ESX/ESXi host
If you connect a VI Client 2.0.x instance to vCenter Server 4.0 or an ESX/ESXi 4.0 host, vSphere Client 4.0 is automatically downloaded onto the Windows machine where the VI Client resides. This operation relies on Internet Explorer to perform this download. By default, Internet Explorer on Windows 2003 systems blocks the download if the VI Client instance is VI Client 2.0.x.
Workaround: In Internet Explorer, select Tools > Internet Options > Advanced and uncheck the option Do not save encrypted pages to disk. Alternatively, download and install vSphere Client 4.0 manually from vCenter Server 4.0 or the ESX/ESXi 4.0 host.
-
vCenter Server database upgrade fails for Oracle 10gR2 database with certain user privileges
If you upgrade VirtualCenter Server 2.x to vCenter Server 4.0 and the database user has only the connect, create view, create any sequence, create any table, and execute on dbms_lock privileges on the Oracle 10gR2 database, the database upgrade fails. The VCDatabaseUpgrade.log file shows the following error:
Error: Failed to execute SQL procedure. Got exception: ERROR [HY000] [Oracle][ODBC][Ora]ORA-01536: space quota exceeded for tablespace 'USERS'
Workaround: As database administrator, enlarge the user tablespace or grant the unlimited tablespace privilege to the user who performs the upgrade.
-
vCenter Server installation fails on Windows Server 2008 when using a non system user account
When you specify a non-system user during installation, vCenter Server installation fails with the following error message:
Failure to create vCenter repository
Workaround: On the system where vCenter Server is being installed, turn off the User Account Control option under Control Panel > User Accounts before you install vCenter Server. Specify the non-system user during vCenter Server installation.
-
Cannot log in to VirtualCenter Server 2.5 after installing VI Client 2.0.x, 2.5, and vSphere Client 4.0 and then uninstalling VI Client 2.0.x on a Windows Vista system
After you uninstall the VI Client 2.0.x on a Windows Vista machine where the VI Client 2.0.x, 2.5, and the vSphere Client 4.0 coexist, you cannot log in to vCenter Server 2.5. Login fails with the following message:
Class not registered(Exception from HRESULT:0x80040154(REGDB_E_CLASSNOTREG))
Workaround: Disable the User Account Control setting on the system where VI Client 2.0.x, 2.5, and vSphere Client 4.0 coexist, or uninstall and reinstall VI Client 2.5.
-
The ESX/ESXi installer lists local SAS storage devices in the Remote Storage section
When displaying storage locations for ESX or ESXi Installable to be installed on, the installer lists a local SAS storage device in the Remote Storage section. This happens because ESX/ESXi cannot determine whether the SAS storage device is local or remote and always treats it as remote.
Workaround: None
-
If vSphere Host Update Utility loses its network connection to the ESX host, the host upgrade might not work
If you use vSphere Host Update Utility to perform an ESX/ESXi host upgrade and the utility loses its network connection to the host, the host might not be completely upgraded. When this happens, the utility might stop, or you might see the following error message:
Failed to run compatibility check on the host.
Workaround: Close the utility, fix the network connection, restart the utility, and rerun the upgrade.
-
vCenter Server installation on Windows Server 2008 with a remote SQL Server database fails in some circumstances
If you install vCenter Server on Windows 2008, using a remote SQL Server database with Windows authentication for SQL Server, and a domain user for the DSN that is different from the vCenter Server system login, the installation does not proceed and the installer displays the following error message:
25003.Setup failed to create the vCenter repository
Workaround: In these circumstances, use the same login credentials for vCenter Server and for the SQL Server DSN.
-
The Next run time value for some scheduled tasks is not preserved after you upgrade from VirtualCenter 2.0.2.x to vCenter Server 4.0
If you upgrade from VirtualCenter 2.0.2.x to vCenter Server 4.0, the Next run time value for some scheduled tasks might not be preserved and the tasks might run unexpectedly. For example, if a task is scheduled to run at 10:00 am every day, it might run at 11:30 am after the upgrade.
This problem occurs because of differences in the way that VirtualCenter 2.0.2.x and vCenter Server 4.0 calculate the next run time. You see this behavior only when the following conditions exist:
- You have scheduled tasks for which you edited the run time after the tasks were initially scheduled, so that they now have a different Next run time.
- The newly scheduled Next run time has not yet occurred.
Workaround: Perform the following steps:
- Wait for the tasks to run at their scheduled Next run time before upgrading.
- After you upgrade from VirtualCenter 2.0.x to vCenter Server 4.0, edit and save the scheduled tasks. This process recalculates the Next run time of each task to its correct value.
-
Virtual machine hardware upgrades from version 4 to version 7 cause Solaris guests to lose their network settings
Upgrading virtual machine hardware from version 4 to version 7 changes the PCI bus location of the guest's virtual network adapters. Solaris does not recognize the adapters at their previous locations and renumbers its network interfaces (for example, e1000g0 becomes e1000g1). Because Solaris IP settings are associated with interface names, the network settings appear to be lost and the guest is likely to lack proper connectivity.
Workaround: Determine the new interface names after the virtual machine hardware upgrade by using the prtconf -D command, and then rename all the old configuration files to their new names. For example, e1000g0 might become e1000g1, so every /etc/*e1000g0 file should be renamed to its /etc/*e1000g1 equivalent.
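A minimal sketch of this recovery inside the Solaris guest follows; the interface names are hypothetical examples:
prtconf -D | grep e1000g                         # determine the new interface instance numbers
mv /etc/hostname.e1000g0 /etc/hostname.e1000g1   # rename each old configuration file to its new name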
-
The vCenter Server installer cannot detect service ports if the services are not running
When you install vCenter Server and accept the default ports, if those ports are being used by services that are not running, the installer cannot validate the ports. The installation fails, and an error message might appear, depending on which port is in use.
This problem does not affect IIS services. IIS services are correctly validated, regardless of whether the services are running.
Workaround: Verify which ports are being used for services that are not running before beginning the installation and avoid using those ports.
- Upgrades where two versions of ESXi co-exist on the same machine fail
Running two versions of ESXi on the same machine is not supported. You must remove one of the versions. The following workarounds apply to the possible combinations of two ESXi versions on the same machine.
Workarounds:
- If ESXi Embedded and ESXi Installable are on the same machine and you choose to remove ESXi Installable and only use ESXi Embedded, follow the steps below.
- Make sure you can boot the machine from the ESXi Embedded USB flash device.
- Copy the virtual machines from the ESXi Installable VMFS datastore to the ESXi Embedded VMFS datastore.
This is a best practice to prevent loss of data.
- Remove all partitions except for the VMFS partition on the disk with ESXi Installable installed.
- Reboot the machine and configure the boot setting to boot from the USB flash device.
- If ESXi Embedded and ESXi Installable are on the same machine and you choose to remove ESXi Embedded and only use ESXi Installable, follow the steps below.
- Boot the system from ESXi Installable.
- Reboot the machine and configure the boot setting to boot from the hard disk where you installed ESXi rather than the USB disk.
- If you can remove the ESXi Embedded USB device, remove it. If the USB device is internal, clear or overwrite the USB partitions.
- If two versions of ESXi Embedded or two versions of ESXi Installable are on the same machine, remove one of the installations.
-
Patch installation using vihostupdate fails on ESXi hosts for file sizes above 256MB
Patch installation fails on an ESXi 4.0 host if you install by using the vihostupdate command on a server that does not have a scratch directory configured and the downloaded file is larger than 256MB. The installation usually fails on a host machine that does not have an associated LUN, or an ESXi 4.0 installation on a Fibre Channel or Serial Attached SCSI (SAS) disk.
Verify the scratch directory settings on the ESXi server and make sure that the scratch directory is configured. When the ESXi server first boots, the system tries to configure the scratch directory automatically. If no storage is available for the scratch directory, the scratch directory is not persistently configured and points to a temporary directory.
Workaround: To work around the limitation on single file size, configure a scratch directory on a VMFS volume by using the vSphere Client (a vSphere CLI alternative is sketched after the steps below).
To configure the scratch directory:
- Connect to the host with vSphere Client.
- Select the host in the Inventory.
- Click the Configuration tab.
- Select Advanced Settings from the Software settings list.
- Find ScratchConfig in the parameters list and set the value for ScratchConfig.ConfiguredScratchLocation to a directory on a VMFS volume connected to the host.
- Click OK.
- Reboot the host machine to apply your changes to the host.
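If you prefer the vSphere CLI, a sketch of the same change follows; the datastore path is hypothetical, and you should verify the vicfg-advcfg syntax for your vCLI version:
vicfg-advcfg --server <esxi_host> --username root -s /vmfs/volumes/datastore1/.scratch ScratchConfig.ConfiguredScratchLocation
Reboot the host afterward for the change to take effect.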
-
The vihostupdate command can fail on ESXi 4.0 hosts for which the scratch directory is not configured
Depending on the configuration of the scratch directory, bundle sizes, for example the size of the ESXi 4.0 Update 1 bundle, might be too large for an ESXi 4.0 host. In such cases, if the scratch directory is not configured to use disk-backed storage, installation with vihostupdate fails.
Workaround: You can change the configuration of the scratch directory by using the VMware vSphere Client or the VMware Update Manager. The following steps illustrate the use of the client.
- Check the configuration of the scratch directory.
The following is the navigation path from vSphere Client:
Configuration>Advanced Settings>ScratchConfig
For an ESXi host the following applies:
- When the scratch directory is set to /tmp/scratch, the size of the bundle is limited. For example, you can apply a patch bundle of 163 MB, but you cannot apply an update bundle, such as an ESXi 4.0 update bundle of 281 MB.
- When the scratch directory is set to a VMFS volume path, you can apply bundles as large as an ESXi 4.0 update bundle of 281 MB.
- Change the scratch directory to the appropriate setting using vSphere Client.
The following is the navigation path from vSphere Client: Configuration>Advanced Settings>ScratchConfig.
- Reboot the ESXi host for the edited settings to take effect.
- Issue the vihostupdate.pl command to install the bundle.
For example, you can issue a command such as the following, replacing the placeholders as appropriate:
vihostupdate.pl --server <server> --username root --password <password> --bundle http://<path_to_bundle>.zip --install
-
When ESXi 3.5 is upgraded to ESXi 4.0 Update 3, the esxupdate query command does not show the installed bulletins
Bulletins are installed as part of the upgrade from ESXi 3.5 to ESXi 4.0 Update 3. However, after the upgrade, the esxupdate query command does not list any bulletins.
Workaround: The issue does not affect the core functionality of the host. No workaround.
vCenter Server, vSphere Client, and vSphere Web Access
-
Alarms with health status trigger conditions are not migrated to vSphere 4.0
The vSphere 4.0 alarm triggers functionality has been enhanced to contain additional alarm triggers for host health status. In the process, the generic Host Health State trigger was removed. As a result, alarms that contained this trigger are no longer available in vSphere 4.0.
Workaround:
Use the vSphere Client to recreate the alarms. You can use any of the following pre-configured VMware alarms to monitor host health state:
- Host battery status
- Host hardware fan status
- Host hardware power status
- Host hardware temperature status
- Host hardware system board status
- Host hardware voltage
- Host memory status
- Host processor status
- Host storage status
If the pre-configured alarms do not handle the health state you want to monitor, you can create a custom host alarm that uses the Hardware Health changed event trigger. You must manually define the conditions that trigger this event alarm. In addition, you must manually set which action occurs when the alarm triggers.
Note: The pre-configured alarms already have default trigger conditions defined for them. You only need to set which action occurs when the alarm triggers.
-
The vSphere Client does not update sensors that are associated with physical events
The vSphere Client does not always update sensor status. Some events can trigger an update, such as a bad power supply or the removal of a redundant disk. Other events, such as chassis intrusion and fan removal, might not trigger an update to the sensor status.
Workaround: None
-
Virtual machines disappear from the virtual switch diagram in the Networking View for host configuration
In the vSphere Client Networking tab for a host, virtual machines are represented in the virtual switch diagram. If you select another host and then return to the Networking tab of the first host, the virtual machines might disappear from the virtual switch diagram.
Workaround: Select a different view in the Configuration tab, such as Network Adapters, Storage, or Storage Adapters, and return to the Networking tab.
-
Starting or stopping the vctomcat Web service at the Windows command prompt might result in an error message
On Windows operating systems, if you use the net start and net stop commands to start and stop the vctomcat Web service, the following error message might appear:
The service is not responding to the control function.
More help is available by typing NET HELPMSG 2186.
Workaround: You can ignore this error message. If you want to stop the error message from occurring, modify the registry to increase the default timeout value for the service control manager (SCM).
For more information, see the following Microsoft KB article: http://support.microsoft.com/kb/922918.
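According to that Microsoft KB article, the timeout is controlled by the ServicesPipeTimeout registry value. A sketch of the change follows (the value is in milliseconds, and a reboot is required afterward):
reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 60000 /f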
-
The vc-support command uses a 64-bit DSN application and cannot gather data from the vCenter Server database
When you use the VMware cscript vc-support.wsf command to retrieve data from the vCenter Server database, the default Microsoft cscript.exe application is used. This application is configured to use a 64-bit DSN rather than a 32-bit DSN, which is required by the vCenter Server database. As a result, errors occur and you cannot retrieve the data.
Workaround: At a system prompt, run the vc-support.wsf command with the 32-bit DSN cscript.exe application:
%windir%\SysWOW64\cscript.exe vc-support.wsf
-
The vSphere Client Roles menu does not display role assignments for all vCenter Server systems in a Linked Mode group
When you create a role on a remote vCenter Server system in a Linked Mode group, the changes you make are propagated to all other vCenter Server systems in the group.
However, the role appears as assigned only on the systems that have permissions associated with the role. If you remove a role, the operation only checks the status
of the role on the currently selected vCenter Server system. However, it removes the role from all vCenter Server systems in the Linked Mode group without issuing a warning that the role might be in use on the other servers.
Workaround: Before you delete a role from vCenter Server system, ensure that the role is not being used across other vCenter Server systems. To see if a role is in use, go to the Roles view and use the navigation bar to select each vCenter Server system in the group. The role's usage is displayed for the selected vCenter Server system.
See vSphere Basic System Administration to learn about best practices for users and groups as well as information on setting roles for Linked Mode vCenter Server groups.
-
Joining a Linked mode group after installation is unsuccessful if UAC is enabled on Windows Server 2008
When User Account Control (UAC) is enabled on Windows Server 2008 32- or 64-bit operating systems and you try to join a machine to a Linked Mode group on a system that is
already running vCenter Server, the link completes without any errors, but it is unsuccessful. Only one vCenter Server appears in the inventory list.
Workaround: Complete the following procedures.
After installation, perform the following steps to turn off UAC before joining a Linked Mode group:
- Select Start > Settings > Control Panel > User Accounts to open the User Accounts dialog box.
- Click Turn User Account control on or off.
- Deselect User Account Control (UAC) to help protect your computer and click OK.
- Reboot the machine when prompted.
Start the Linked Mode configuration process as follows:
- Select Start > All Programs > VMware > vCenter Server Linked Mode Configuration.
- Click Next.
- Select Modify Linked-Mode configuration and click Next.
- Click Join this vCenter Server instance to an existing Linked-Mode group or another instance and click Next.
- Enter the server name and LDAP port information and click Next.
- Click Continue to complete the installation.
- Click Finish to end the linking process.
Log in to one of the vCenter Servers and verify that the servers are linked. After the vCenter Servers are linked, turn on UAC as follows:
- Select Start > Settings > Control Panel > User Accounts to open the User Accounts dialog box.
- Click Turn User Account control on or off.
- Select User Account Control (UAC) to help protect your computer and click OK.
- Reboot the machine when prompted.
-
Joining two vCenter Server instances fails with an error message in status.txt about failure to remove VMwareVCMSDS
Joining an existing standalone vCenter Server instance to a Linked Mode group causes the vCenter Server installer to fail. When this happens, vCenter Server does not start on the machine where you are performing the installation. Also, messages indicating problems with LDAP connectivity or the LDAP service being unreachable are written to the <temp_directory>\status.txt file, where <temp_directory> is the temporary directory defined on your Windows system. To diagnose this problem, open the status.txt file and look for the following message:
[2009-03-06 21:44:55 SEVERE] Operation "Join instance VMwareVCMSDS" failed: : Action: Join Instance
Action: Removal of standalone instance
Action: Remove Instance
Problem: Removal of instance VMwareVCMSDS failed: The removal wizard was not able to remove all of the components. To complete removal, run "Adamuninstall.exe /i:<instance>" after resolving the following error:
Folder '<path>\VMwareVCMSDS' could not be deleted.
The directory is not empty.
Workaround: Perform the following steps:
- From a command prompt with administrator-level privileges, change directories to the vCenter Server installation directory.
- Delete the VMwareVCMSDS directory.
- Recover the local LDAP instance by typing jointool.bat recover.
-
Networking problems and errors might occur when analyzing machines with VMware Guided Consolidation
When a large number of machines are under analysis for Guided Consolidation, the vCenter Collector Provider Services component of Guided Consolidation might be mistaken for a virus or worm by the operating system on which the Guided Consolidation functionality is installed. This occurs when the analysis operation encounters a large number of machines that have invalid IP addresses or name resolution issues. As a result, a bottleneck occurs in the network and error messages appear.
Workaround: Do not add machines for analysis if they are unreachable. If you add machines by name, make sure the NetBIOS name is resolvable and reachable. If you add machines by IP address, make sure the IP address is static.
-
Virtual machine templates stored on shared storage become unavailable after Distributed Power Management (DPM) puts a host in standby mode or when a host is put in maintenance mode
The vSphere Client associates virtual machine templates with a specific host. If the host storing the virtual machine templates is put into standby mode by DPM or into maintenance mode, the templates appear disabled in the vSphere Client. This behavior occurs even if the templates are stored on shared storage.
Workaround: Disable DPM on the host that stores the virtual machine templates. When the host is in maintenance mode, use the Datastore browser on another host that is not in maintenance or standby mode and also has access to the datastore on which the templates are stored to find the virtual machine templates. Then you can provision virtual machines using those templates.
-
You might encounter a LibXML DLL module load error when you install vSphere CLI for the first time on some Windows platforms, such as 32-bit Windows Vista Enterprise SP1
-
Incorrect links on ESX and ESXi Welcome page
The download links in the vSphere Remote Command Line and vSphere Web Services SDK sections, as well as the links for downloading vSphere 4 documentation and VMware vCenter, on the ESX and ESXi Welcome page are incorrectly mapped.
Workaround: Download the products from the VMware Web site.
-
On Nexus 1000v, distributed power management cannot put a host into standby
If a host does not have Integrated Lights-Out (iLO) or Intelligent Platform Management Interface (IPMI) support for distributed power management (DPM), the host can still use DPM provided that all of its physical NICs that are added to the Nexus 1000V DVS support Wake-on-LAN. If even one of those physical NICs does not support Wake-on-LAN, DPM cannot put the host into standby.
Workaround: None.
Virtual Machine Management
-
Custom scripts assigned in vmware-toolbox for suspend power event do not run when you suspend the virtual machine from the vSphere Client
If you have assigned a custom script to the suspend power event on the Scripts tab of vmware-toolbox, and you have configured the virtual machine to run VMware Tools scripts on suspend, the custom script does not run when you suspend the virtual machine from the vSphere Client.
Workaround: None
-
Automatic VMware Tools upgrade on guest power on reboots the guest automatically without issuing a reboot notification
If you select to automatically update VMware Tools on a Windows Vista or Windows 2008 guest operating system, when the operating system powers on, VMware Tools is updated and the guest operating system automatically reboots without issuing a reboot notification message.
Workaround: None
-
An IDE hard disk added to a hardware version 7 virtual machine is defined as Hard Disk 1 even if a SCSI hard disk is already present
If you have a hardware version 7 virtual machine with a SCSI disk already attached as Hard Disk 1 and you add an IDE disk, the virtual machine alters the disk numbering. The IDE disk is defined as Hard Disk 1 and the SCSI disk is changed to Hard Disk 2.
Workaround: None. However, if you decide to delete one of the disks, do not rely exclusively on the disk number. Instead, verify the disk type to ensure that you are deleting the correct disk.
-
Reverting to snapshot might not work if you cold migrate a virtual machine with a snapshot from an ESX/ESXi 3.5 host to an ESX/ESXi 4.0 host
You can cold migrate a virtual machine with snapshots from an ESX/ESXi 3.5 host to an ESX/ESXi 4.0 host. However, reverting to a snapshot after the migration might not work.
Workaround: None
-
The vCenter Server fails when the delta disk depth of a linked virtual machine clone is greater than the supported depth of 32
If the delta disk depth of a linked virtual machine clone is greater than the supported depth of 32, the vCenter Server fails and the following error message appears:
Win32 exception: Stack overflow
In such instances, you cannot restart the vCenter Server unless you remove the virtual machine from the host or clean the vCenter Server database. Consider removing the virtual machine from the host rather than cleaning the vCenter Server database, because it is much safer.
Workaround: Perform the following steps:
- Log in to the vSphere Client on the host.
- Display the virtual machine clone in the inventory.
- Right-click the virtual machine and choose Delete from Disk.
- Restart the vCenter Server.
Note: After you restart the vCenter Server, if the virtual machine is listed in the vSphere Client inventory and the Remove from Inventory option is disabled in the virtual machine context menu, you must manually remove the virtual machine entry from the vCenter database.
-
Creating a new SCSI disk in a virtual machine can result in an inaccurate error message
When you create a new SCSI disk in a virtual machine and set the SCSI bus sharing to virtual, an error message is issued that includes the following line:
Please verify that the virtual disk was created using the "thick" option.
However, thick by itself is not a valid option. The correct option is eagerzeroedthick.
Workaround: Using the command line, create the SCSI disk with the vmkfstools command and the eagerzeroedthick option.
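For example, the following creates a 10GB eagerzeroedthick disk; the datastore, directory, and size are hypothetical:
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/myvm_1.vmdk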
-
The Installation Boot options for a virtual machine are not exported to Open Virtualization Format (OVF)
When you create an OVF package from a virtual machine that has the Installation Boot option enabled, this option is ignored during export. As a result, the OVF descriptor is missing the InstallSection element, which provides information about the installation process. When you deploy an OVF package, the InstallSection element is parsed correctly.
Workaround: After exporting the virtual machine to OVF, manually create the InstallSection parameters in the OVF descriptor. If a manifest (.mf) file is present, you must regenerate it after you modify the OVF descriptor.
Example (a representative InstallSection, following the OVF specification):
<InstallSection ovf:initialBootStopDelay="300">
  <Info>Specifies that an install boot is needed.</Info>
</InstallSection>
The inclusion of the InstallSection parameters in the descriptor informs the deployment process that an install boot is required to complete deployment. The ovf:initialBootStopDelay attribute specifies the boot delay.
See the OVF specification for details.
-
Virtual machine fails to boot after adding an iLO virtual CD-ROM without media as a SCSI device *
After an Integrated Lights-Out (iLO) virtual CD-ROM without media is added to a virtual machine as a SCSI device, the virtual machine fails during booting when it tries to boot from the virtual CD-ROM.
The three workarounds for this issue are as follows:
- Ensure that the iLO virtual CD-ROM always has media connected when any virtual machine uses it.
- If the virtual CD-ROM is not used for guest operating system installation in the virtual machine, change the boot order in the virtual machine BIOS to list the hard disk, floppy disk, and NIC above the CD-ROM.
- Avoid using the iLO virtual CD-ROM. ESX can connect both local and remote CD-ROM devices and ISO images to virtual machines, without the restriction of only one CD-ROM device being exposed by iLO on a system.
vMotion and Storage vMotion
-
Reverting to a snapshot might fail after reconfiguring and relocating the virtual machine
If you reconfigure the properties of a virtual machine and move it to a different host after you have taken a snapshot of it, reverting to the snapshot of that virtual machine might fail.
Workaround: Avoid moving virtual machines with snapshots to hosts that differ significantly from the original host (for example, a different ESX/ESXi version or CPU type).
-
Using Storage vMotion to migrate a virtual machine with many disks might time out
A virtual machine with many virtual disks might be unable to complete a migration with Storage vMotion. The Storage vMotion process requires time to open, close, and process disks during the final copy phase. Storage vMotion migrations of virtual machines with many disks might time out because of this per-disk overhead.
Workaround: Increase the Storage vMotion fsr.maxSwitchoverSeconds setting in the virtual machine configuration file to a larger value. The default value is 100 seconds. Alternatively, at the time of the Storage vMotion migration, avoid running a large number of provisioning operations, migrations, power on, or power off operations on the same datastores the Storage vMotion migration is using.
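For example, you might add a line such as the following to the virtual machine's .vmx configuration file while the virtual machine is powered off; the value of 300 seconds is an illustrative choice, not a recommendation:
fsr.maxSwitchoverSeconds = "300"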
-
Storage vMotion of NFS volume may be overridden by NFS server disk format
When you use Storage vMotion to migrate a virtual disk to an NFS volume or perform other virtual machine provisioning that involves NFS volumes, the disk format is determined by the NFS server where the destination NFS volume resides. This overrides any selection you made in the Disk Format menu.
Workaround: None
-
If ESX/ESXi hosts fail or reboot during Storage vMotion, the operation can fail and virtual machines might become orphaned
If hosts fail or reboot during Storage vMotion, the vMotion operation can fail. The destination virtual machine's virtual disks might show up as orphaned in the vSphere inventory after the host reboots. Typically, the virtual machine's state is preserved before the host shuts down.
If the virtual machine does not show up in an orphaned state, check to see if the destination VMDK files exist.
Workaround: You can manually delete the orphaned destination virtual machine from the vSphere inventory. Locate and delete any remaining orphaned destination disks if they exist on the datastore.
VMware High Availability and Fault Tolerance
-
Failover to VMware FT secondary virtual machine produces error message on host client
When VMware Fault Tolerance is failing over to a secondary virtual machine, if the host chosen for the secondary virtual machine has recently booted, the host client sees this attempt as failing and displays the following error message:
Login failed due to a bad username or password.
This error message is seen because the host has recently booted and it is possible that it has not yet received an SSL thumbprint from the vCenter Server. After the thumbprint is pushed out to the host, the failover succeeds. This condition is likely to occur only if all hosts in an FT-enabled cluster have failed, causing the host with the secondary virtual machine to be freshly booted.
Workaround: None. The failover succeeds after a few attempts.
-
Changing the system time on an ESX/ESXi host produces a VMware HA agent error
If you change the system time on an ESX/ESXi host, after a short time interval, the following HA agent error appears:
HA agent on <server> in <cluster> in <data center> has an error.
This error is displayed in both the event log and the host's Summary tab in the vSphere Client.
Workaround: Correct the host's system time and then restart vpxa by running the service vmware-vpxa restart command.
-
VMware Fault Tolerance does not support IPv6 addressing
If the VMkernel NICs for Fault Tolerance (FT) logging or vMotion are assigned IPv6 addresses, enabling Fault Tolerance on virtual machines fails.
Workaround: Configure the VMkernel NICs to use IPv4 addressing.
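For example, on ESX you can create an IPv4 VMkernel NIC on the FT logging port group from the service console; the addresses and port group name here are hypothetical:
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "FT-Logging"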
-
Hot-plugging of devices is not supported on virtual machines with FT enabled or disabled
The hot-plugging feature is not supported on virtual machines on which VMware Fault Tolerance is enabled or disabled. You must temporarily turn off Fault Tolerance before you hot-plug a device. After hot-plugging, you can turn Fault Tolerance back on. However, after hot-removal of a device, you must reboot the virtual machine before turning Fault Tolerance back on.
VMware Tools
-
Windows Server 2008 R2 64-bit IP virtualization might not work on vSphere 4.0 Update 1 *
IP virtualization, which allows you to allocate unique IP addresses to RDP sessions, might not work on a Windows 2008 R2 Terminal Server running on vSphere 4.0 Update 1. However, IP virtualization works when you configure a physical Windows 2008 R2 Terminal Server, or when you run a Windows 2008 R2 virtual machine on XenServer 5.5 Update 2 Dell OEM Edition.
This issue might occur if you install VMware Tools after installing Remote Desktop services.
Workaround: Choose custom installation for installing VMware Tools, and remove VMCI from the list of drivers that should be installed.
-
Virtual machine snapshots stop responding when certain conditions apply
Attempts to take a virtual machine snapshot might leave the task showing an In Progress status when all of the following conditions apply:
- The Snapshot the virtual machine's memory option is not selected.
- The Quiesce guest file system option is selected.
- A third-party VSS (Volume Shadow Copy Service) provider is installed.
In such cases, the In Progress status continues to be displayed until the task display times out. Moreover, the process continues to run, preventing other snapshots from being taken.
Top of Page