VMware ESXi 4.0 Update 1 Release Notes
ESXi 4.0 Update 1 Installable | 19 Nov 2009 | Build 208167
ESXi 4.0 Update 1 Embedded | 19 Nov 2009 | Build 208167
Last Document Update: 18 Dec 2009
These release notes include the following topics:
What's New
The following information provides highlights of some of the enhancements available in this release of VMware ESXi:
VMware View 4.0 support – This release adds support for VMware View 4.0, a solution built specifically for delivering desktops as a managed service from the protocol to the platform.
Windows 7 and Windows 2008 R2 support – This release adds support for 32-bit and 64-bit versions of Windows 7 as well as 64-bit Windows 2008 R2 as guest OS platforms. In addition, the vSphere Client is now supported and can be installed on a Windows 7 platform. For a complete list of supported guest operating systems with this release, see the VMware Compatibility Guide.
Enhanced Clustering Support for Microsoft Windows – Microsoft Cluster Server (MSCS) for Windows 2000 and 2003 and Windows Server 2008 Failover Clustering is now supported on a VMware High Availability (HA) and Distributed Resource Scheduler (DRS) cluster in a limited configuration. HA and DRS functionality can be effectively disabled for individual MSCS virtual machines as opposed to disabling HA and DRS on the entire ESX/ESXi host. Refer to the Setup for Failover Clustering and Microsoft Cluster Service guide for additional configuration guidelines.
Enhanced VMware Paravirtualized SCSI Support – Support for boot disk devices attached to a Paravirtualized SCSI (PVSCSI) adapter has been added for Windows 2003 and 2008 guest operating systems. Floppy disk images containing the driver are also available for use during Windows installation; press F6 during setup to install additional drivers. Floppy images can be found in the /vmimages/floppies/ folder.
Improved vNetwork Distributed Switch Performance – Several performance and usability issues have been resolved resulting in the following:
- Improved performance when making configuration changes to a vNetwork Distributed Switch (vDS) instance when the ESX/ESXi host is under a heavy load
- Improved performance when adding or removing an ESX/ESXi host to or from a vDS instance
Increase in vCPU per Core Limit – The limit on vCPUs per core has been increased from 20 to 25. This change raises the supported limit only. It does not include any additional performance optimizations. Raising the limit allows users more flexibility to configure systems based on specific workloads and to get the most advantage from increasingly faster processors. The achievable number of vCPUs per core depends on the workload and specifics of the hardware. For more information see the Performance Best Practices for VMware vSphere 4.0 guide.
Enablement of Intel Xeon Processor 3400 Series – Support for the Xeon processor 3400 series has been added. For a complete list of supported third party hardware and devices, see the VMware Compatibility Guide.
Resolved Issues – In addition, this release delivers a number of bug fixes that have been documented in the Resolved Issues section.
Prior Releases of VMware vSphere 4
Features and known issues from prior releases of ESXi 4.0 are described in the release notes for each release. To view release notes for prior releases of ESXi 4.0, click one of the following links:
Before You Begin
ESXi, vCenter Server, and vSphere Client Version Compatibility
The VMware vSphere Compatibility Matrixes provide details on the compatibility of current and previous versions of VMware vSphere components, including ESXi, vCenter Server, the vSphere Client, and optional VMware products. In addition, check the vSphere 4.0 Compatibility Matrixes for information on supported management and backup agents before installing ESXi or vCenter Server.
Hardware Compatibility
- Learn about hardware compatibility:
The Hardware Compatibility Lists are available on the Web-based Compatibility Guide at http://www.vmware.com/resources/compatibility. The Web-based Compatibility Guide is a single point of access for all VMware compatibility guides and provides the option to search the guides, and save the search results in PDF format.
Subscribe to be notified of Compatibility Guide updates.
- Learn about vSphere compatibility:
VMware vSphere Compatibility Matrixes (PDF)
Documentation
The documentation has been updated for VMware vSphere 4.0 Update 1.
Installation and Upgrade
Read the ESXi Installable and vCenter Server Setup Guide Update 1 for step-by-step guidance on installing and configuring ESXi Installable and vCenter Server or the ESXi Embedded and vCenter Server Setup Guide Update 1 for step-by-step guidance on setting up ESXi Embedded and vCenter Server.
After successful installation of ESXi Installable or successful boot of ESXi Embedded, several configuration steps are essential. In particular, some licensing, networking, and security configuration is necessary. Refer to the following guides in the vSphere documentation for guidance on these configuration tasks.
Future releases of VMware vSphere might not support VMFS version 2 (VMFS2). VMware recommends upgrading or migrating to VMFS version 3 or higher. See the vSphere Upgrade Guide Update 1.
Future releases of VMware vCenter Server might not support installation on 32-bit Windows operating systems. VMware recommends installing vCenter Server on a 64-bit Windows operating system. If you have VirtualCenter 2.x installed, see the vSphere Upgrade Guide Update 1 for instructions on installing vCenter Server on a 64-bit operating system and preserving your VirtualCenter database.
The VMware Tools RPM installer, which was previously available on the VMware Tools ISO image for Linux guest operating systems, has been removed for ESXi. VMware recommends using the tar.gz installer to install VMware Tools on virtual machines with Linux guest operating systems.
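As a rough illustration, installing VMware Tools from the tar.gz installer in a Linux guest, after attaching the VMware Tools ISO image to the virtual machine, might look like the following sketch. The CD-ROM mount point and the version string in the archive name are placeholders that vary by guest.
mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
cd /tmp
tar zxpf /mnt/cdrom/VMwareTools-<version>.tar.gz
cd vmware-tools-distrib
./vmware-install.pl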
Management Information Base (MIB) files related to ESXi are not bundled with vCenter Server. Only MIB files related to vCenter Server are shipped with vCenter Server 4.0. All MIB files can be downloaded from the VMware Web site at http://www.vmware.com/download.
Upgrading or Migrating to ESXi 4.0 Update 1
vSphere 4.0 Update 1 offers the following applications that you can use to upgrade to ESXi 4.0 Update 1:
- vSphere Host Update Utility—For standalone hosts. A standalone host is an ESX/ESXi host that is not managed by vCenter Server. See the vSphere Upgrade Guide Update 1 for more information.
- VMware vCenter Update Manager—For ESX/ESXi hosts that are managed by vCenter Server. See the VMware vCenter Update Manager Administration Guide Update 1 for more information.
- vihostupdate Command-Line Utility—The vihostupdate command applies software updates to ESX/ESXi hosts and installs and updates ESX/ESXi extensions such as VMkernel modules, drivers, and CIM providers. See the vSphere Upgrade Guide Update 1 for more information. A usage sketch follows this list.
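As a rough illustration, applying an offline bundle with the vihostupdate command from a system that has the vSphere CLI installed might look like the following sketch; the host name, user name, and bundle file name are placeholders, not values taken from this release.
vihostupdate --server esxihost.example.com --username root --install --bundle ESXi400-Update01-offline-bundle.zip
vihostupdate --server esxihost.example.com --username root --query
The second command lists the bulletins installed on the host so that you can verify the update.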
Supported Upgrade Paths for Host Upgrade to ESXi 4.0 Update 1
Upgrade Type | ESXi Server 3.5 | ESXi Server 3.5 (Includes: Update 1, Update 2, Update 3, Update 4, Update 5) | ESXi 4.0
Upgrade with ESXi 4.0 Update 1 upgrade ZIP | Yes | Yes | No
Upgrade using ESXi 4.0 Update 1 offline bundle | No | No | Yes
Notes:
- For upgrades using the upgrade ZIP file, the supported approaches are the vSphere Host Update Utility and vCenter Update Manager. For more details, see the vSphere Upgrade Guide Update 1 and the VMware vCenter Update Manager Administration Guide Update 1.
- For upgrades using the ESXi 4.0 Update 1 offline bundle, the supported approach is the vihostupdate command-line utility. For more details, see the vSphere Upgrade Guide Update 1.
- For upgrading the ESXi 4.0 host to ESXi 4.0 Update 1 using online patches and vCenter Update Manager 4.0 Update 1, see the patches section of these release notes and vCenter Update Manager Administration Guide 4.0 U1.
- For upgrading the ESXi 4.0 Host to ESXi 4.0 Update 1 using online patches and vSphere Host Update Utility, refer to the patches section of these release notes and vSphere Upgrade Guide Update 1.
Upgrading VMware Tools
This version of VMware ESXi requires a VMware Tools upgrade.
Patches Contained in this Release
In addition to ISO images, the ESXi 4.0 Update 1 release, both embedded and installable, is distributed as a patch that can be applied to existing installations of ESXi 4.0 software.
The patch bundle can be downloaded from the VMware Download Patches page, or can be applied using VMware Update Manager.
The patch bundle contains the same bulletins that are available individually in VMware Update Manager. The patch bundle is the following:
Patch Release ESXi400-Update01 contains the following individual bulletins:
This release also contains all patches for the ESXi Server software that were released prior to the release date of this product. See the VMware Download Patches page for more information on the individual patches.
ESXi 4.0 Update 1 also contains all fixes in the following previously released bundles:
See the documentation listed on the download page for more information on the contents of each patch.
Resolved Issues
This release resolves issues in the subject areas that follow:
† Resolved issues previously documented as known issues.
CIM and API
-
ESXi Lockdown mode not handled correctly
The CIMOM on an ESXi host shows these problems while the host is in Lockdown mode:
- The PowerManagementService.Reboot() method
incorrectly reports success while the host is in lockdown mode.
- The RecordLog.ClearLog() method succeeds, but it should be rejected.
- During Lockdown mode, the CIMOM makes system calls in a loop,
which could increase CPU utilization to 100%.
- The repeated system calls cause authentication failure messages to be written
to system logs.
These problems have been corrected.
The CIMOM now rejects the following extrinsic method calls during
lockdown:
- Requests authenticated using the 'root' username are always rejected during lockdown.
- The PowerManagementService.Reboot() method always fails during lockdown
because the PowerManagementService provider always authenticates with the host
when it executes. During lockdown, the host does not accept authentication requests.
The CIMOM accepts the following extrinsic method calls during lockdown mode:
- Extrinsic method calls other than PowerManagementService.Reboot() that are authenticated with a user name other than 'root' if the user name
has vSphere administrative privileges on the host.
These method calls continue to be authenticated from the authentication cache.
- Extrinsic methods that are authenticated with a 'ticket'
acquired from
an AcquireCIMServicesTicket() request to the vSphere Web Services API.
The ticket can only be issued if the host is managed by vCenter,
and only before the host enters lockdown mode.
Ticket authentication is valid for the RecordLog.ClearLog() method.
However, the PowerManagementService.Reboot() method is an exception.
Guest Operating System
-
Virtual machines sometimes fail with a blue screen when hardware acceleration is fully enabled
A virtual machine fails with a blue screen when you run certain applications with hardware acceleration fully enabled in a Windows guest operating system.
This is an issue with the SVGA driver, which is resolved in this release.
-
Fixes an issue where VMware Tools reports incorrect disk information through GuestInfo for Linux guests using Logical Volume Manager (LVM) partitions
-
New: Overestimated memory usage of guest operating system causes alarms in vCenter Server to go off spuriously
A guest operating system's memory usage might be overestimated on Intel systems that support EPT technology or on AMD systems that support RVI technology. This might cause the memory alarms in the vCenter Server to go off even if the guest operating system is not actively accessing a lot of memory. This issue is resolved in this release.
-
Font rendering issue in virtual machines when viewing in widescreen
When a virtual machine is configured for widescreen resolution, fonts appear distorted in Microsoft Office applications.
This issue appears when the resolution is set to 2560 x 1024.
The issue is resolved in this release.
-
New: System commands fail while using the guest SDK Library on vMA 4.0 system
When you try to use the guest SDK Library on a vSphere Management Assistant (vMA) 4.0 system, the system commands fail, and the following error might be displayed: Cannot read file data: Error 21
This issue occurs because the guest SDK libraries are not located in directories known to the ldconfig command.
This issue is resolved in this release.
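As background on the mechanism involved, the runtime linker searches only directories that are registered with ldconfig. A minimal sketch of registering an additional library directory on a Linux system follows; the directory path and configuration file name are hypothetical and do not describe the actual vMA 4.0 layout.
echo /usr/lib/vmware-guest-sdk > /etc/ld.so.conf.d/vmware-guest-sdk.conf
ldconfig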
-
If you upgrade VMware Tools or the VSS components in VMware Tools to version 4.0, applications that require the msvcp71.dll file fail to start when a virtual machine is rebooted
This issue is resolved in this release.
-
Fixes an issue where e1000 vNIC emulation does not function properly under the OS/2 guest operating system
This fix includes updated e1000 vNIC emulation to work around the issue.
Miscellaneous
-
New: After starting an ESX/ESXi host, a warning message is logged in /var/log
After powering on or restarting the ESX/ESXi host, the following warning message is logged in the /var/log/messages file:
Peer table full for sfcbd.
You can ignore this message. It does not indicate any issues with ESX/ESXi.
This issue is resolved in this release.
-
A series of vmklinux heap allocation warnings are followed by an ESX/ESXi system failure
This issue is caused by an erroneous response to a legitimate overcommitment of memory. When memory runs low from heavy swapping or vMotion use, a vmklinux limitation might be encountered. Specifically, the problem is triggered by a shortage of memory located below address 4GB. In such a situation, a series of log messages warn of a failure to allocate memory for the vmklinux heap. ESX/ESXi then becomes unavailable, logging exception 14 in a helper world. The following log excerpt is indicative of the messages logged:
[1:01:35:02.450 cpu7:4480)WARNING: Heap: 1471: Could not allocate 2093688 bytes for dynamic heap vmklinux. Request returned Out of memory
[1:01:35:02.450 cpu7:4480)WARNING: Heap: 1645: Heap_Align(vmklinux, 1024/1024 bytes, 8 align) failed. caller: 0x4180303746b7
[1:01:35:02.450 cpu7:4480)WARNING: Heap: 1471: Could not allocate 2093688 bytes for dynamic heap vmklinux. Request returned Out of memory
[1:01:35:02.450 cpu7:4480)WARNING: Heap: 1645: Heap_Align(vmklinux, 1024/1024 bytes, 8 align) failed. caller: 0x4180303746b7
[VMware ESX [Releasebuild-164009 X86_64]
#PF Exception(14) in world 4480:helper18-7 ip 0x41803037480c addr 0x0
While this issue might in theory occur on ESXi, all observations have been with ESX installations. ESX has a higher vulnerability due to the use of low memory by the service console.
This issue is resolved in this release.
Networking
-
When Intel MT and PT dual-port NICs are in promiscuous mode, the VLAN filter is turned on
This results in dropped Cisco Discovery Protocol (CDP) packets. This fix adds checks for CDP packets with VLAN tags, sanity checks on incoming packets, and logging output for parsing errors.
-
Fixes an issue with the NetXen nx_nic network driver on NX2031 cards where ESX might stop responding after a queue stop on systems with more than 32GB of memory
-
Fixes an issue where ESX might disable the CDP daemon in the console operating system
-
Updates the description of Broadcom NetXtreme BCM5722 network adapter in vCenter for some Dell servers
The description in vCenter of the Broadcom NetXtreme BCM5722 network adapter on some Dell servers contained a few unnecessary words. These have been removed, and the description is now NetXtreme BCM5722 Gigabit Ethernet. The description is updated for Dell PowerEdge T105, R300, and T300 servers.
-
Fixes an issue where ESX might fail with the nx_nic driver after decoding an Ethernet header as an IP header
This fix sets the LRO enabled variable to 0 because Large Receive Offload (LRO) is not supported in ESX 4.0 Update 1.
Security
-
Fixes an issue where the NTPD daemon might have a stack-based buffer overflow if it is configured to use the autokey security model
The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2009-1252 to this issue.
-
A stack-based buffer overflow in ISC dhclient might allow remote DHCP servers to execute arbitrary code by using a crafted subnet-mask option
The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2009-0692 to this issue.
Server Configuration
-
ESXi 4.0 server sometimes becomes unresponsive when host memory is heavily over-committed
The ESXi 4.0 server sometimes becomes unresponsive when host memory is overcommitted, with usage of more than 200%, and the server is configured as a Long Uptime Server.
When the server stops responding, all virtual machines fail. This issue is resolved in this release.
-
Fixes an issue where the BIOS version reported by CIM OMC_SMASHFirmwareIdentity is different from dmidecode on some machines
-
Fixes an issue where the DRAC gets the operating system name through IPMI and displays VMware ESX Server for both ESX and ESXi
This fix replaces the hard coded name with VMware_HypervisorSoftwareIdentity.ElementName.
-
Fixes an issue where the version information retrieved for the VMware_HypervisorSoftwareIdentity is hard coded at build time
If CIM components are patched individually, the build values for this provider might not match the build number or version information of other components or applications. Once this fix is applied, version information is retrieved through a common mechanism used by other components in the system.
-
New: ESXi DCUI fails if management network test is interrupted and then restarted
The ESXi Direct Console User Interface (DCUI) fails if you interrupt a management network test by pressing the Esc key and then start the test again.
This issue is resolved in this release.
Storage
-
Fixes an issue where the Dell MegaRAID SAS1078R controller is not identified as PERC 6 because the PCI ID subsystem information is not handled properly
-
ESX Server might stop responding when NFS volume goes offline
When a mounted NFS volume goes offline in an ESX Server cluster, the heap size might grow and ESX Server might stop responding.
-
New: Applications might fail and display I/O error during a LUN reset
During a LUN or SCSI bus reset on an RHEL 5 guest operating system, the applications in the virtual machine might fail due to an I/O error. This issue occurs only when you are using the PVSCSI adapter and ESX iSCSI initiator. This issue is resolved in this release.
-
Proxy file path access to an SMB/CIFS shared storage fails
Booting a virtual machine from a CD-ROM fails when the ISO file is located on a share mounted using the SMB/CIFS protocol.
This issue occurs because proxy file path access is denied when using the SMB/CIFS protocol. This issue is resolved in this release.
-
This fix increases the memory heap for the HPSA driver so that it can handle more than 2 controllers
-
New: SCSI abort failure reported during testing of multiple virtual machines
Because the HP Smart Array driver does not support SCSI abort, a device reset is issued instead. An immediate reset can disrupt other concurrent I/O, causing additional I/O failures and SCSI abort failures.
This issue is resolved in this release: the reset is now issued after a slight delay.
Supported Hardware
-
vCenter issues an error when the Power.CpuPolicy configuration option is changed to dynamic
While changing the Power.CpuPolicy option from static to dynamic, vCenter issues the following error message:
The value entered is not valid. Enter another value
This error appears because ESX/ESXi 4.0 attempts to change the system's CPU power management policy to dynamic even when the BIOS does not properly support processor performance states (P-states).
This issue is resolved in this release.
Upgrade and Installation
-
Upgrading ESX Server 3i 3.5 to ESXi 4.0 fails in specific cases
This issue occurs only with installations on serial attached SCSI (SAS) disks or Fibre Channel (FC) disks. In such cases, when you attempt to upgrade ESX Server 3i 3.5 installed on a SAS or FC disk, the following error occurs during the upgrade:
Unsupported boot disk. The boot device layout on the host will not support the upgrade
Note that this issue is one of a variety that can cause the preceding error to appear.
This issue is resolved in this release. However, be aware that an in-place upgrade of ESXi Installable is not possible on a boot LUN which also contains a VMFS partition.
-
Update: ESXi Installable upgrade through vSphere Host Update Utility fails with the error “ERROR: Unsupported boot disk”
This error can occur when a large amount of local storage exists for the boot device of the host. In such a scenario, a precheck script for the upgrade fails because the script does not have large integer support, which is necessary when the storage size reaches a certain point. Therefore, the precheck fails because of a limitation in the script, not because of an upgrade support issue.
The full error for this issue is as follows:
ERROR: Unsupported boot disk
The boot device layout on the host does not support upgrade
This issue is resolved in this release.
-
Fixes an issue where installing VMware Tools overwrites the existing virtual printer drivers "TPOG" and "TPOGPS" if ThinPrint's .print server is installed
This fix checks for a registry entry created by .print. If this registry entry is detected, the virtual printer drivers bundled with VMware Tools are not installed.
vCenter Server and vSphere Client
-
vSphere Client does not show correct identifier for FreeBSD operating system
The Summary tab in the vSphere Client does not display the correct identifier for the FreeBSD guest operating system.
This issue is resolved in this release.
Virtual Machine Management
-
New: SLES 10 guest operating system is incorrectly reported in vCenter as SLES 8/9 when VMware Tools is running
When ESX 4.0 is managed by vCenter 4.0 and VMware Tools is running, the Summary tab of the vSphere Client displays the SUSE Linux Enterprise 10 (32-bit) guest operating system as SUSE Linux Enterprise 8/9 (32-bit), and SUSE Linux Enterprise 10 (64-bit) as SUSE Linux Enterprise 8/9 (64-bit).
-
Fixes an issue where ESX might fail if an excessive number of synctime RPC messages build up in the queue, which results in VMX running out of memory
This fix limits the number of synctime RPC messages in the queue to 1.
-
New: Networking performance data is missing when the VMXNET3 adapter is used
The Networking panel is missing in the Performance tab of a virtual machine when a guest is using a VMXNET Generation 3 adapter. If a virtual machine has a mix of virtual adapters, the Networking panel for adapters of a type other than VMXNET3 is still displayed.
This issue is resolved in this release.
-
Fixes an issue where a virtual machine's heartbeat status might appear healthier than it is
vMotion and Storage vMotion
-
The fix for a previously identified vMotion failure might prevent migration of virtual machines with video RAM greater than 30MB to a host without the fix
ESX/ESXi 4.0 Update 1 fixes the vMotion failure described in KB 1011971. However, using vMotion to migrate a virtual machine with video RAM greater than 30MB to an ESX/ESXi 4.0 Update 1 host might prevent you from migrating it back to a host that does not have this fix.
-
New: Applications such as MPlayer and MEncoder running in a virtual machine fail with an illegal instruction
A variety of applications, such as MPlayer and MEncoder, use Supplemental Streaming SIMD Extensions 3 (SSSE3) instructions.
These applications, when run in a virtual machine, fail with an illegal instruction error following a vMotion migration or a suspension and resumption of the virtual machine. On rare occasions, this type of failure occurs during normal execution.
This issue is resolved in this release.
-
New: When using the vMotion feature, a migration failure occurs followed by a system failure of the destination host
When a virtual machine migration using vMotion fails due to a rare resume error, the source virtual machine might retain a stale swap state. Any subsequent migration attempt from the source virtual machine can result in a destination-host system failure.
This issue is resolved in this release.
VMware High Availability and Fault Tolerance
-
Enabling HA fails when the ESX/ESXi host does not have DNS connectivity
If the ESX/ESXi host does not have DNS connectivity and the host short name is not populated in the /etc/hosts file, enabling or configuring VMware HA might fail. This issue is resolved in this release.
-
When enabling FT, the Secondary VM starts for a few seconds and then fails, which causes the Primary VM to go into the Need Secondary VM state
When Primary and Secondary VMs for FT run on hosts with mixed steppings of Intel Xeon 5400 or 5200 Series Processors (CPUID Family 6, Model 23, steppings 6 and 10), the Secondary VM starts for a few seconds and then fails, which causes the Primary VM to go into the Need Secondary VM state.
This issue is resolved in this release.
Known Issues
This section describes known issues in this release in the following subject areas:
Backup
-
VMware Consolidated Backup (VCB) 1.5 Update 1 with Windows 7 and Windows 2008 R2 x64
VMware Consolidated Backup (VCB) 1.5 Update 1 supports full virtual machine backups and restores of Windows 7 and Windows 2008 R2 x64 guest operating systems. However, file level backups are not supported with Windows 7 or Windows 2008 R2 x64 guests.
-
VMware Consolidated Backup (VCB) is not supported with Fault Tolerance
A VCB backup performed on an FT-enabled virtual machine powers off both the primary and the secondary virtual machines and might render the virtual machines unusable.
Workaround: None
Guest Operating System
-
Solaris 10 U4 virtual machine becomes nonresponsive during VMware Tools upgrade
Upgrading or restarting VMware Tools in a Solaris 10 U4 virtual machine with an advanced vmxnet adapter might cause the guest operating system to become nonresponsive and the installation to be unable to proceed.
Solaris 10 U5 and later versions are not affected by this issue.
Workaround: Before installing or upgrading VMware Tools, temporarily reconfigure the advanced vmxnet adapter by removing its autoconfiguration files in /etc/ or removing the virtual hardware.
-
On ESXi, VMware Tools installation fails to find PBMs for certain Linux guest operating systems
The default installer for VMware Tools only provides prebuilt modules (PBMs) for a subset of supported Linux guest operating systems.
Workaround: From the VMware Web site, download an alternative Linux Tools ISO image that contains VMware Tools for supported as well as a variety of older and unsupported Linux guest operating systems. Alternatively, you can compile kernel modules for the unsupported Linux release by using the vmware-install.pl script distributed as part of VMware Tools.
-
Devices attached to a hot-added BusLogic adapter are not visible to the Linux guest
Devices attached to a hot-added BusLogic adapter are not visible to a Linux guest if the guest previously had another BusLogic adapter. In addition, hot removal of the BusLogic adapter might fail. This issue occurs because the BusLogic driver available in Linux distributions does not support hot plug APIs. This problem does not affect hot add of disks to the adapter, only hot add of the adapter itself.
Workaround: Use a different adapter, such as a parallel or SAS LSI Logic adapter, for hot add capabilities. If a BusLogic adapter is required, attempt a hot remove of the adapter after unloading the BusLogic driver in the guest. You can also attempt to get control of the hot-added adapter by loading another instance of the BusLogic driver with the command modprobe -o BusLogic1 BusLogic (replacing BusLogic1 with BusLogic2, BusLogic3, and so on, for each subsequent hot add operation).
-
Virtual machines with Windows NT guests require a response to a warning message generated when the virtual machine attempts to automatically upgrade VMware Tools
If you set the option to automatically check and upgrade VMware Tools before each power-on operation for Windows NT guests, the following warning message appears:
Set up failed to install the vmxnet driver Automatically, This driver will have to be installed manually
Workaround: The upgrade stops until the warning is acknowledged. To complete the upgrade, log in to the Windows NT guest and acknowledge the warning message.
-
Multiple DNS suffixes are not applied correctly after image customization of Linux distributions
DNS suffixes are automatically appended when a Linux distribution tries to resolve a DNS domain name. When more than one DNS suffix is customized, only the last DNS suffix is applied. Depending on the Linux distribution, not all customized DNS suffixes appear in the Linux distribution user interface.
Workaround: None
-
Creating a virtual machine of Ubuntu 7.10 Desktop can result in the display of a black screen
When you run the installation for the Ubuntu 7.10 Desktop guest on a virtual machine with paravirtualization enabled on an AMD host, the screen of the virtual machine might remain blank. The correct behavior is for the installer to provide instructions for you to remove the CD from the tray and press return.
Workaround: Press the return key. The installation proceeds to reboot the virtual machine. Furthermore, this issue does not occur if you start the installation on a virtual machine with two or more virtual processors.
-
New: An automatic VMware Tools upgrade might fail for a Red Hat Enterprise Linux 5.x virtual machine
An automatic VMware Tools upgrade might fail with the Error upgrading VMware Tools error for a Red Hat Enterprise Linux 5.x virtual machine cold migrated from an ESX 3.0.3 host to an ESX/ESXi 4.0 Update 1 host.
Workaround: Manually upgrade VMware Tools on the ESX 4.0 Update 1 host.
-
New: Wake-on-LAN does not work with the e1000 vNIC on newer Windows guests
For ESX/ESXi hosts, the Wake-on-LAN feature (turning on a host with a network message) is not available with the e1000 vNIC on certain Windows guests. Specifically, Wake-on-LAN does not work with Windows Vista and later, or with 64-bit Windows versions.
Workaround: Use a vNIC type that supports Wake-on-LAN, such as VMXNET3.
-
For Windows Vista and Windows 7 guests, video output might be displayed incorrectly
Windows Media Player in a Windows Vista or Windows 7 guest may incorrectly display video files when the video is scaled.
Workaround: Either of the following actions works around this issue:
- Play video in full screen mode (ALT + ENTER).
- Uncheck Fit Video to player on resize.
-
Windows 7 operating system with Media Player 11 is not supported
The Microsoft Windows 7 guest operating system with Windows Media Player 11 is not supported on a virtual machine. If you run Windows Media Player and maximize its window in a virtual machine that is running the Microsoft Windows 7 operating system, the virtual machine might fail.
Internationalization
Licensing
Miscellaneous
-
New: The diagnostic partition is not initialized until a system failure occurs
By default the diagnostic partition (or dump partition) is not initialized. Trying to collect the information in the diagnostic partition, for example by running the vm-support command, creates a harmless error message indicating that the diagnostic partition is not initialized.
Workaround: None. This issue does not affect the processing of the vm-support command. You can safely ignore the error message.
-
Stopping or restarting the vCenter Server service through the Windows Services Control MMC plug-in might lead to an error message
Under certain circumstances, the vCenter Server service might take longer than usual to start. Stopping and restarting the vCenter Server service through the Windows Services Control MMC plug-in might lead to the following error message:
Service failed to respond in a timely manner.
This message indicates that the time required to shut down or start up vCenter Server was more than the configured system-wide default timeout for starting or stopping the service.
Workaround: Refresh the Services Control screen after a few minutes, which should show that the service has been correctly stopped and restarted.
-
Help menu items appear inactive or clicking other links for Help results in an error
In vSphere Client installations on machines running non-English, non-Japanese, non-German, and non-Simplified Chinese Windows operating systems, Help menu items appear inactive. In addition, if you click other links or buttons for Help within the vSphere Client the following error message appears:
Missing help file
Workaround: Copy the contents of the vSphere Client online help folder from
C:\Program\VMware\Infrastructure\Virtual Infrastructure Client\Help\en\VIC40
to
C:\Program\VMware\Infrastructure\Virtual Infrastructure Client\Help\en.
Ensure that only the contents of the vSphere Client online help subdirectory (VIC40) are copied to this level.
To view the online help for other product components under C:\Program\VMware\Infrastructure\Virtual Infrastructure Client\Help\en, double-click the index.html file in the subdirectory for that help system. For example, to view the DRS Troubleshooting help system, double-click the index.html file in the DSR40 subdirectory. The subdirectories for online help systems for other vSphere modules at this location vary depending on the vSphere products you have installed.
Networking
-
NetXen chipset does not have hardware support for VLANs
The NetXen NIC does not display Hardware Capability Support for VMNET_CAP_HW_TX_VLAN and VMNET_CAP_HW_RX_VLAN. This occurs because the NetXen chipset does not have hardware support for VLANs. NetXen VLAN support is available in software.
-
The Custom creation of a virtual machine allows a maximum of four NICs to be added
During the creation of a virtual machine using the Custom option, the vSphere Client provides the Network configuration screen, which asks how many NICs you would like to connect. The drop-down menu allows up to four NICs only. However, 10 NICs are supported on ESX/ESXi 4.0 Update 1.
Workaround: Add more NICs by using the following procedure.
- Using the vSphere Client, navigate to Home>Inventory>VMs and Templates.
- With the Getting Started tab selected, click Edit virtual machine settings.
- Click Add.
- Select Ethernet Adapter and click Next.
- Continue selecting the appropriate settings for your specific scenario.
-
VMware Distributed Power Management (DPM) fails to wake a host from standby when the host is configured using Wake-On-LAN and a NetXen NIC for its vMotion network
The driver for NetXen NICs (nx_nic) that is included in this release advertises Wake-on-LAN support for all NetXen NICs, but the Wake-on-LAN feature does not work on most NICs when using this version of the driver. The only NetXen NIC for which Wake-on-LAN is known to work with this driver is the NetXen HP NC375i Integrated Quad Port Multifunction Gigabit Server Adapter, commonly found in the HP ProLiant ML370 G6. Because the driver advertises Wake-on-LAN support for other NetXen NICs as well, DPM is unaware that the support does not work for those NICs and will attempt to use it if configured to do so.
Workaround: If the host has support for IPMI or iLO as a wake protocol, configure DPM to use one of those protocols instead of Wake-on-LAN. Otherwise, install a NIC with working Wake-on-LAN support, or disable DPM on this host.
-
Switching a VMkernel NIC using DHCPv6 from static DNS attribution to DHCP DNS will not update the DNS name servers
Using the service console, vSphere CLI, or vSphere Client, to switch an Internet Protocol version 6 (IPv6) VMkernel NIC using Dynamic Host Configuration Protocol version 6 (DHCPv6) from static Domain Name System (DNS) attribution to DHCP DNS will not update the DNS name servers until the next DHCPv6 lease renewal.
Workaround: Manually disable and then re-enable the VMkernel NIC to acquire the new DNS name servers. You can accomplish this by selecting the Restart Management Network option in the Direct Console User Interface, the keyboard-only user interface for ESXi. If you take no action, the DNS name server is acquired when the DHCPv6 lease is renewed.
-
The VmwVmNetNum of VM-INFO MIB displays as Ethernet0 when running snmpwalk
When snmpwalk is run for the VM-INFO MIB on an ESX/ESXi host, the VmwVmNetNum value of the VM-INFO MIB is displayed as Ethernet0 instead of Network Adapter1, while the MOB URL in the VmwVmNetNum description displays Network Adapter1.
Workaround: None
-
Applications that use VMCI Sockets might fail after virtual machine migration
If you have applications that use Virtual Machine Communication Interface (VMCI) Sockets, the applications might fail after virtual machine migration if the VMCI context identifiers used by the application are already in use on the destination host. In this case, VMCI stream or Datagram sockets that were created on the originating host stop functioning properly. It also becomes impossible to create new stream sockets.
Workaround: For Windows guest operating systems, reload the guest VMCI driver by rebooting the guest operating system or enabling the device through the device manager. For Linux guests, shut down applications that use VMCI Sockets, remove and reload the vsock kernel module and restart the applications.
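For the Linux case, a minimal sketch of the module reload described above, run as root inside the guest after the applications that use VMCI Sockets have been shut down:
rmmod vsock
modprobe vsock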
-
Applying port groups with multiple statically assigned VMKNICs or VSWIFs results in repeated prompts for an IP address
Applying a vDS portgroup with multiple statically assigned VMKNICs or VSWIFs causes you to be repeatedly prompted to enter an IP address. DHCP-assigned interfaces are not affected.
Workaround: Use only one statically assigned VMKNIC or VSWIF per portgroup. If multiple statically assigned VMKNICs are desired on the same vDS portgroup, then assign each VMKNIC or VSWIF to a unique set of services (for example, vMotion, Fault Tolerance, and other services).
-
DPM cannot put a host in standby mode when the VMkernel vMotion NIC is part of a vDS and the host is configured to use Wake-on-LAN for remote power-on
If a host's VMkernel vMotion NIC is part of a vDS, the NIC is configured to not support Wake-on-LAN. Therefore, unless the host is configured to use IPMI or iLO for remote power-on, it is considered incapable of remote power-on and DPM cannot automatically move it into standby mode. DPM selects other hosts to put into standby mode, if possible. If you attempt to put the host into standby mode manually, the attempt fails and an Enter Standby Stopped dialog box appears.
Workaround: Use IPMI or iLO to power on hosts that support one of these protocols instead of Wake-on-LAN by configuring the IPMI or iLO credentials on each host. Alternatively, if you need to use Wake-on-LAN to power on hosts, configure the VMkernel vMotion interface on a vNetwork Standard Switch (vSwitch), rather than on a vDS.
-
Retrieval of DNS and host name information from the DHCP server might be delayed or prevented
-
New: Changing the network settings of an ESX/ESXi host prevents some hardware health monitoring software from auto-discovering it
After the network settings of an ESX/ESXi host are changed, third-party management tools that rely on the CIM interface (typically hardware health monitoring tools) are unable to automatically discover the host through the Service Location Protocol (SLP) service.
Workaround: Manually enter the hostname or IP address of the host in the third-party management tool. Alternatively, restart slpd and sfcbd-watchdog by using the applicable method:
On ESXi:
- Enter the Technical Support Mode.
- Restart slpd by running the /etc/init.d/slpd restart command.
- Restart sfcbd-watchdog by running the /etc/init.d/sfcbd-watchdog restart command.
Alternatively, restart the management agents from the Direct Console User Interface (DCUI). This restarts other agents on the host in addition to the ones impacted by this defect and might be more disruptive.
On ESX: In the ESX service console, run the following commands:
/etc/init.d/slpd restart
/etc/init.d/sfcbd-watchdog restart
Server Configuration
-
Host profiles do not capture or duplicate physical NIC duplex information
When you create a new host profile, the physical NIC duplex information is not captured. This is the intended behavior. Therefore, when the reference host's profile is used to configure other hosts, the operation negotiates the duplex configuration on a per physical NIC basis. This provides you with the capability to generically handle hosts with a variety of physical NIC capabilities.
Workaround: To set the physical NIC duplex value uniformly across NICs and hosts that are to be configured using the reference host profile, modify the host profile after it is created and reapply the parameters.
To edit the profile, follow the steps below.
- On the vSphere Client Home page, click Host Profiles.
- Select the host profile in the inventory list, then click the Summary tab and click Edit Profile.
- Select Network configuration > Physical NIC configuration > Edit.
- Select Fixed physical NIC configuration in the drop-down menu and enter the speed and duplex information.
Storage
-
Adding a QLogic iSCSI adapter to an ESX/ESXi system fails if an existing target with the same name but a different IP address exists
Adding a static target for a QLogic hardware iSCSI adapter fails if an existing target has the same iSCSI name, even if the IP address is different.
You can add a target for a QLogic iSCSI adapter on an ESX/ESXi system only with a unique iSCSI name; the combination of IP address and iSCSI name is not considered. In addition, the driver and firmware do not support multiple sessions to the same storage end point.
Workaround: None. Do not use the same iSCSI name when you add targets.
-
On rare occasions, after repeated SAN path failovers, operations that involve VMFS changes might fail for all ESX/ESXi hosts accessing a particular LUN
On rare occasions, after repeated path failovers to a particular SAN LUN, attempts to perform such operations as VMFS datastore creation, vMotion, and so on might fail on all ESX/ESXi hosts accessing this LUN. The following warnings appear in the log files of all hosts:
I/O failed due to too many reservation conflicts.
Reservation error: SCSI reservation conflict
If you see the reservation conflict messages on all hosts accessing the LUN, the problem is caused by SCSI reservations on the LUN that were not completely cleaned up.
Workaround: Run the following LUN reset command from any system in the cluster to remove the SCSI reservation:
vmkfstools -L lunreset /vmfs/devices/disks/<device_name>
-
NAS datastores report incorrect available space
When you view the available space for an ESX/ESXi host by using the df (ESXi) or vdf (ESX) command in the host service console, the space reported for ESX/ESXi NAS datastores is free space, not available space. The space reported for NFS volumes in the Free column when you select Storage > Datastores on the vSphere Client Configuration tab, also reports free space, and not the available space. In both cases, free space can be different from available space.
ESX file systems do not distinguish between free blocks and available blocks, and always report free blocks for both (specifically, the f_bfree and f_bavail fields of struct statfs). For NFS volumes, free blocks and available blocks can differ.
Workaround: Check the NFS server to get correct information about available space. No workaround is available on ESX/ESXi.
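For example, one rough way to see the authoritative numbers is to run df on the NFS server itself rather than on the ESX/ESXi host; the export path shown is a placeholder.
df -h /export/nfs_datastore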
-
Harmless warning messages concerning region conflicts are logged in the VMkernel logs for some IBM servers
When the SATA/IDE controller works in legacy PCI mode in the PCI config space, an error message similar to the following might appear in the VMkernel logs:
WARNING: vmklinux26: __request_region: This message has repeated 1 times: Region conflict @ 0x0 => 0x3
Workaround: Such error messages are harmless and can be safely ignored.
Supported Hardware
-
No CIM indication alerts are received when the power supply cable and power supply unit are reinserted into HP servers
No new SEL (IML) entries are created for power supply cable and power supply unit reinsertion into HP servers when recovering a failed power supply. As a result, no CIM indication alerts are generated for these events.
Workaround: None
-
Slow performance during virtual machine power-On or disk I/O on ESX/ESXi on the HP G6 Platform with P410i or P410 Smart Array Controller
Some of these hosts might show slow performance during virtual machine power on or while generating disk I/O. The major symptom is degraded I/O performance, causing large numbers of error messages similar to the following to be logged to /var/log/messages.
Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL!
Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL!
Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060600) to NMP device
"naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x
Mar 25 17:39:26 0 0x0 0x0.
Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)WARNING: NMP: nmp_DeviceRetryCommand: Device
"naa.600508b1001030304643453441300100": awaiting fast path state update for failoverwith I/O blocked. No prior reservation
exists on the device.
Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060700) to NMP device
"naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x
Mar 25 17:39:26 0 0x0 0x0.
Workaround: Install the HP 256MB P-series Cache Upgrade module.
-
On certain versions of vSphere Client, the battery status might be incorrectly listed as an alert
In vSphere Client from the Hardware Status tab, when the battery is in its learn cycle, the battery status provides an alert message indicating that the health state of the battery is not good. However, the battery level is actually fine.
Workaround: None.
-
A "Detected Tx Hang" message appears in the VMkernel log
Under a heavy load, due to hardware errata, some variants of e1000 NICs might lock up. ESX/ESXi detects the issue and automatically resets the card. This issue is related to Tx packets, TCP workloads, and TCP Segmentation Offloading (TSO).
Workaround: You can disable TSO by setting the /adv/Net/UseHwTSO option to 0 in the esx.conf file.
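As a hedged sketch of that change, assuming the esxcfg-advcfg utility in the ESX service console (or the equivalent vicfg-advcfg command in the vSphere CLI for ESXi) and the /Net/UseHwTSO advanced option path:
esxcfg-advcfg -s 0 /Net/UseHwTSO
A host reboot might be required for the new setting to take effect.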
-
Event messages from StoreLib of IR cards show incorrect timestamp
The IndicationTime in the event messages from StoreLib shows an incorrect timestamp for LSI 1078 and 1068E Integrated RAID (IR) controllers.
Upgrade and Installation
-
vSphere Client installation might fail with Error 1603 if you do not have an active Internet connection
You can install the vSphere Client in two ways: from the vCenter Server media or by clicking a link on the ESX, ESXi, or vCenter Server Welcome screen. The installer on the vCenter Server media (.iso file or .zip file) is self-contained, including a full .NET installer in addition to the vSphere Client installer. The installer called through the Welcome screen includes a vSphere Client installer that makes a call to the Web to get .NET installer components.
If you do not have an Internet connection, the second vSphere Client installation method will fail with Error 1603 unless you already have .NET 3.0 SP1 installed on your system.
Workaround: Establish an Internet connection before attempting the download, install the vSphere Client from the vCenter Server media, or install .NET 3.0 SP1 before clicking the link on the Welcome screen.
-
The vCenter Server system's Database Upgrade wizard might overestimate the disk space requirement during an upgrade from VirtualCenter 2.0.x to vCenter Server 4.0
During the upgrade of VirtualCenter 2.0.x to vCenter Server 4.0, the Database Upgrade wizard can show an incorrect value in the database disk space estimation. The estimation shown is typically higher than the actual space required.
Workaround: None
-
If SQL Native Client is already installed, you cannot install vCenter with the bundled SQL Server 2005 Express database
When you are installing vCenter with the bundled SQL Server 2005 Express database, if SQL Native Client is already installed, the installation fails with the following error message:
An Installation package for the product Microsoft SQL Native Client cannot be found. Try the installation using a valid copy of the installation package sqlcli.msi.
Workaround: Uninstall SQL Native Client if it is not used by another application. Then, install vCenter with the bundled SQL Server 2005 Express database.
-
vSphere Client 4.0 download times out with an error message when you connect VI Client 2.0.x on a Windows 2003 machine to vCenter Server or an ESX/ESXi host
If you connect a VI Client 2.0.x instance to vCenter Server 4.0 or an ESX/ESXi 4.0 host, vSphere Client 4.0 is automatically downloaded onto the Windows machine where the VI Client resides. This operation relies on Internet Explorer to perform this download. By default, Internet Explorer on Windows 2003 systems blocks the download if the VI Client instance is VI Client 2.0.x.
Workaround: In Internet Explorer, select Tools > Internet Options > Advanced and uncheck the option Do not save encrypted pages to disk. Alternatively, download and install vSphere Client 4.0 manually from vCenter Server 4.0 or the ESX/ESXi 4.0 host.
-
vCenter Server database upgrade fails for Oracle 10gR2 database with certain user privileges
If you upgrade VirtualCenter Server 2.x to vCenter Server version 4.0 and you have connect, create view, create any sequence, create any table, and execute on dbms_lock privileges on the database (Oracle 10gR2), the database upgrade fails. The VCDatabaseUpgrade.log file shows the following error:
Error: Failed to execute SQL procedure. Got exception: ERROR [HY000] [Oracle][ODBC][Ora]ORA-01536: space quota exceeded for tablespace 'USERS'
Workaround: As database administrator, enlarge the user tablespace or grant the unlimited tablespace privilege to the user who performs the upgrade.
-
vCenter Server installation fails on Windows Server 2008 when using a nonsystem user account
When you specify a non-system user during installation, vCenter Server installation fails with the following error message:
Failure to create vCenter repository
Workaround: On the system where vCenter Server is being installed, turn off the User Account Control option under Control Panel > User Accounts before you install vCenter Server. Specify the non-system user during vCenter Server installation.
-
Cannot log in to VirtualCenter Server 2.5 after installing VI Client 2.0.x, 2.5, and vSphere Client 4.0 and then uninstalling VI Client 2.0.x on a Windows Vista system
After you uninstall the VI Client 2.0.x on a Windows Vista machine where the VI Client 2.0.x, 2.5, and the vSphere Client 4.0 coexist, you cannot log in to vCenter Server 2.5. Log-in fails with the following message:
Class not registered(Exception from HRESULT:0x80040154(REGDB_E_CLASSNOTREG))
Workaround: Disable the User Account Control setting on the system where VI Client 2.0.x, 2.5, and vSphere Client 4.0 coexist, or uninstall and reinstall VI Client 2.5.
-
The ESX/ESXi installer lists local SAS storage devices in the Remote Storage section
When displaying storage locations for ESX or ESXi Installable to be installed on, the installer lists a local SAS storage device in the Remote Storage section. This happens because ESX/ESXi cannot determine whether the SAS storage device is local or remote and always treats it as remote.
Workaround: None
-
If vSphere Host Update Utility loses its network connection to the ESX host, the host upgrade might not work
If you use vSphere Host Update Utility to perform an ESX/ESXi host upgrade and the utility loses its network connection to the host, the host might not be completely upgraded. When this happens, the utility might stop, or you might see the following error message:
Failed to run compatibility check on the host.
Workaround: Close the utility, fix the network connection, restart the utility, and rerun the upgrade.
-
When vSphere Client 4.0 and VI Client 2.5 are installed on the same system, depending on the order in which you uninstall the applications, the desktop shortcut is not updated
If you install the vSphere Client 4.0 application on a system that includes an instance of the VI Client 2.5 application, only the vSphere Client 4.0 desktop shortcut appears on the desktop. You can launch both applications from the shortcut.
However, if you uninstall the vSphere Client 4.0 application but do not uninstall the VI Client 2.5 application, the vSphere Client 4.0 desktop shortcut remains on the system. You can continue to use the shortcut to log in to VI Client 2.5, but if you attempt to log in to vSphere Client 4.0, you are prompted to download the application.
Workaround: Perform one of the following steps:
- If you uninstall only the vSphere Client 4.0 application, rename the desktop shortcut or reinstall the VI Client 2.5 application so that the link correctly reflects the installed client.
- If you uninstall both applications, remove any nonworking shortcuts.
-
vCenter Server installation on Windows Server 2008 with a remote SQL Server database fails in some circumstances
If you install vCenter Server on Windows 2008, using a remote SQL Server database with Windows authentication for SQL Server, and a domain user for the DSN that is different from the vCenter Server system login, the installation does not proceed and the installer displays the following error message:
25003.Setup failed to create the vCenter repository
Workaround: In these circumstances, use the same login credentials for vCenter Server and for the SQL Server DSN.
-
The Next run time value for some scheduled tasks is not preserved after you upgrade from VirtualCenter 2.0.2.x to vCenter Server 4.0
If you upgrade from VirtualCenter 2.0.2.x to vCenter Server 4.0, the Next run time value for some scheduled tasks might not be preserved and the tasks might run unexpectedly. For example, if a task is scheduled to run at 10:00 am every day, it might run at 11:30 am after the upgrade.
This problem occurs because of differences in the way that VirtualCenter 2.0.2.x and vCenter Server 4.0 calculate the next run time. You see this behavior only when the following conditions exist:
- You have scheduled tasks, for which you edited the run time after the tasks were initially scheduled so that they now have a different
Next run time.
- The newly scheduled
Next run time has not yet occurred.
Workaround: Perform the following steps:
- Wait for the tasks to run at their scheduled
Next run time before upgrading.
- After you upgrade from vCenter 2.0.x to vCenter Server 4.0, edit and save the scheduled task. This process recalculates the
Next run time of the task to its correct value.
-
Default alarms new with vCenter Server 4.0 are not added to the system during upgrade
When upgrading to vCenter Server 4.0, the default alarms that are new with 4.0 are not added to the system. The following is a list of missing default alarms:
HostConnectionStateAlarm
VmFaultToleranceLatencyStatusAlarm
HostEsxCosSwapAlarm
VmDiskLatencyAlarm
DatastoreDiskUsageAlarm
LicenseNonComplianceAlarm
VmTimedoutStartingSecondaryAlarm
VmNoCompatibleHostForSecondaryAlarm
HostErrorAlarm
VmErrorAlarm
HostConnectivityAlarm
NetworkConnectivityAlarm
StorageConnectivityAlarm
MigrationErrorAlarm
ExitStandbyErrorAlarm
VmHighAvailabilityError
HighAvailabilityError
LicenseError
HealthStatusChangedAlarm
VmFaultToleranceStateChangedAlarm
Workaround: See VMware Knowledge Base article 1010399 for information on a script that adds the new default alarms to the system.
-
Virtual machine hardware upgrades from version 4 to version 7 cause Solaris guests to lose their network settings
Upgrading the virtual machine hardware from version 4 to version 7 changes the PCI bus location of virtual network adapters in guests. Solaris does not detect the adapters and changes the numbering of its network interfaces (for example, e1000g0 becomes e1000g1). Because Solaris IP settings are associated with interface names, the network settings appear to be lost and the guest is likely not to have proper connectivity.
Workaround: Determine the new interface names after the virtual machine hardware upgrade by using the prtconf -D command, and then rename all the old configuration files to their new names. For example, e1000g0 might become e1000g1, so every /etc/*e1000g0 file should be renamed to its /etc/*e1000g1 equivalent.
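For example, a minimal console sketch of the renaming, assuming the interface instance changed from e1000g0 to e1000g1 (confirm the actual names reported by prtconf -D before renaming anything):
# list the network interface instances detected after the upgrade
prtconf -D | grep e1000g
# rename every old configuration file (for example, /etc/hostname.e1000g0) to the new interface name
for f in /etc/*e1000g0; do
  mv "$f" "`echo $f | sed 's/e1000g0/e1000g1/'`"
done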
-
The vCenter Server installer cannot detect service ports if the services are not running
When you install vCenter Server and accept the default ports, if those ports are being used by services that are not running, the installer cannot validate the ports. The installation fails, and an error message might appear, depending on which port is in use.
This problem does not affect IIS services. IIS services are correctly validated, regardless of whether the services are running.
Workaround: Verify which ports are being used for services that are not running before beginning the installation and avoid using those ports.
-
Updated: Upgrades where two versions of ESXi co-exist on the same machine fail
Running two versions of ESXi on the same machine is not supported. You must remove one of the versions. The following workarounds apply to the possible combinations of two ESXi versions on the same machine.
Workarounds:
- If ESXi Embedded and ESXi Installable are on the same machine and you choose to remove ESXi Installable and only use ESXi Embedded, follow the steps below.
- Make sure you can boot the machine from the ESXi Embedded USB flash device.
- Copy the virtual machines from the ESXi Installable VMFS datastore to the ESXi Embedded VMFS datastore.
This is a best practice to prevent loss of data.
- Remove all partitions except for the VMFS partition on the disk with ESXi Installable installed.
- Reboot the machine and configure the boot setting to boot from the USB flash device.
- If ESXi Embedded and ESXi Installable are on the same machine and you choose to remove ESXi Embedded and only use ESXi Installable, follow the steps below.
- Boot the system from ESXi Installable.
- Reboot the machine and configure the boot setting to boot from the hard disk where you installed ESXi rather than the USB disk.
- If you can remove the ESXi Embedded USB device, remove it. If the USB device is internal, clear or overwrite the USB partitions.
- If two versions of ESXi Embedded or two versions of ESXi Installable are on the same machine, remove one of the installations.
-
The vihostupdate command can fail on ESXi 4.0 hosts for which the scratch directory is not configured
Depending on the configuration of the scratch directory, some bundles, such as the ESXi 4.0 Update 1 bundle, might be too large for an ESXi 4.0 host. In such cases, if the scratch directory is not configured to use disk-backed storage, installation with vihostupdate fails.
Workaround: You can change the configuration of the scratch directory by using the VMware vSphere Client or VMware Update Manager. The following steps illustrate the use of the vSphere Client; an example vSphere CLI alternative follows the steps.
- Check the configuration of the scratch directory.
The following is the navigation path from vSphere Client:
Configuration > Advanced Settings > ScratchConfig
For an ESXi host the following applies:
- When the scratch directory is set to /tmp/scratch, the size of the bundle is limited. For example, you can apply a patch bundle of 163 MB, but you cannot apply an update bundle, such as an ESXi 4.0 update bundle of 281 MB.
- When the scratch directory is set to the VMFS volume path, /<vmfs-volume-path>, you can apply bundles as large as an ESXi 4.0 bundle of 281 MB.
- Change the scratch directory to the appropriate setting using vSphere Client.
The following is the navigation path from vSphere Client: Configuration > Advanced Settings > ScratchConfig.
- Reboot the ESXi host for the edited settings to take effect.
- Issue the vihostupdate.pl command to install the bundle.
For example, you can issue a command such as the following, replacing the placeholders as appropriate:
vihostupdate.pl --server <ServerIPAddressPlaceHolder> --username root --password <PasswordPlaceHolder> --bundle http://<URLplaceHolder>.zip --install
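Alternatively, a minimal vSphere CLI sketch of the same scratch-directory change, assuming that the vicfg-advcfg.pl command in your vSphere CLI installation supports the --get and --set options shown and that the .locker directory name is only an example:
# check the current scratch location
vicfg-advcfg.pl --server <ServerIPAddressPlaceHolder> --username root --password <PasswordPlaceHolder> --get ScratchConfig.ConfiguredScratchLocation
# point the scratch directory at a directory on a VMFS volume (example path)
vicfg-advcfg.pl --server <ServerIPAddressPlaceHolder> --username root --password <PasswordPlaceHolder> --set /vmfs/volumes/<datastore>/.locker ScratchConfig.ConfiguredScratchLocation
As with the vSphere Client steps, reboot the host before issuing the vihostupdate.pl command so that the new scratch location takes effect.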
-
Patch installation using vihostupdate fails on ESXi hosts for file sizes larger than 256 MB
Patch installation fails on an ESXi 4.0 host if you install by using the vihostupdate command on a server that does not have a scratch directory configured and the downloaded file is larger than 256 MB. The installation usually fails on a host machine that does not have an associated LUN, an ESXi 4.0 installation on a Fibre Channel disk, or a Serial Attached SCSI (SAS) disk.
Verify the scratch directory settings on the ESXi server and make sure that the scratch directory is configured. When the ESXi server boots initially, the system tries to automatically configure the scratch directory. If storage is not available for the scratch directory, the scratch directory is not configured and points to a temporary directory.
Workaround:
To work around the limitation on single file size, you should configure a scratch directory on a VMFS volume using the vSphere Client.
To configure the scratch directory:
- Connect to the host with vSphere Client.
- Select the host in the Inventory.
- Click the Configuration tab.
- Select Advanced Settings from the Software settings list.
- Find ScratchConfig in the parameters list and set the value for ScratchConfig.ConfiguredScratchLocation to a directory on a VMFS volume connected to the host.
- Click OK.
- Reboot the host machine to apply your changes to the host.
-
When ESXi 3.5 is upgraded to ESXi 4.0 Update 1, the esxupdate query command does not show the installed bulletins
Bulletins are installed as part of the upgrade from ESXi 3.5 to ESXi 4.0 Update 1. However, after the upgrade, the esxupdate query command does not list any bulletins.
Workaround: The issue does not affect the core functionality of the host. No workaround.
-
The WS-Management service is not started automatically on a host upgraded from ESXi 3.5.x to ESXi 4.0 Update 1
This upgrade can prevent the WS-Management (wsman) service from starting automatically. You can check the status of the service by issuing the following command:
/etc/init.d/wsman status
Workaround:
- Start the wsman service from Tech Support Mode.
See KB 1003677 for information on using Tech Support Mode. The following serves as an example of how to start this service: /etc/init.d/wsman start
- Check the status of the wsman service to ensure that it is running.
For example: /etc/init.d/wsman status
- Rename the WS-Management service entry from wsmand to wsman in the /etc/chkconfig.db file to preserve the change across reboots. The service script itself is located at /etc/init.d/wsman. A consolidated example of these steps appears below.
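A minimal Tech Support Mode sketch that consolidates these steps (the /etc/chkconfig.db edit is done by hand with vi; the only change needed in that file is renaming the wsmand entry to wsman):
/etc/init.d/wsman start    # start the WS-Management service
/etc/init.d/wsman status   # confirm that the service is running
vi /etc/chkconfig.db       # rename the wsmand entry to wsman so that the change persists across reboots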
vCenter Server and vSphere Client
-
The vSphere Client does not update sensors that are associated with physical events
The vSphere Client does not always update sensor status. Some events can trigger an update, such as a bad power supply or the removal of a redundant disk. Other events, such as chassis intrusion and fan removal, might not trigger an update to the sensor status.
Workaround: None
-
In the vSphere Client, clicking Close Tab on the Getting Started tab for certain objects (cluster, host, virtual machine) does not result in any action
In the vSphere Client, clicking Close Tab [x] on the Getting Started tab for certain objects (cluster, host, virtual machine) does not result in any action. This issue occurs only if the vSphere Client is running on a machine whose operating system is configured to disable JavaScript.
Workaround: None
-
The Overview performance charts do not display when vCenter Server uses an Oracle database
You view the performance charts through the Overview view of the Performance tab. If your vCenter Server uses an Oracle database, the charts do not appear when you open this view. Instead, the following error message appears:
STATs Report service internal error
Message: STATs Report application initialization is not completed successfully.
This error occurs because VMware installs a placeholder file instead of the Oracle ojdbc5.jar file with the overview performance charts due to licensing constraints.
Workaround: Perform the following task to overwrite the placeholder Oracle ojdbc5.jar file with the actual file. An example command sequence follows the steps.
- Download the ojdbc5.jar file from the Oracle Technology Network Web site.
- Overwrite the VMware placeholder file with the ojdbc5.jar file you downloaded. By default, this file is installed in the C:\Program Files\VMware\Infrastructure\tomcat\lib directory.
- Restart vCenter Server Web Service.
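For example, a minimal Windows command-prompt sketch, assuming that the downloaded ojdbc5.jar file is in the current directory, that the default installation path is used, and that the Web service runs as the vctomcat service mentioned elsewhere in these release notes:
rem overwrite the placeholder with the Oracle JDBC driver
copy /Y ojdbc5.jar "C:\Program Files\VMware\Infrastructure\tomcat\lib\ojdbc5.jar"
rem restart the vCenter Server Web Service
net stop vctomcat
net start vctomcat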
-
Alarms with health status trigger conditions are not migrated to vSphere 4.0
The vSphere 4.0 alarm triggers functionality has been enhanced to contain additional alarm triggers for host health status. In the process, the generic Host Health State trigger was removed. As a result, alarms that contained this trigger are no longer available in vSphere 4.0.
Workaround:
Use the vSphere Client to recreate the alarms. You can use any of the following preconfigured VMware alarms to monitor host health state:
- Host battery status
- Host hardware fan status
- Host hardware power status
- Host hardware temperature status
- Host hardware system board status
- Host hardware voltage
- Host memory status
- Host processor status
- Host storage status
If the preconfigured alarms do not cover the health state you want to monitor, you can create a custom host alarm that uses the Hardware Health changed event trigger. You must manually define the conditions that trigger this event alarm. In addition, you must manually set which action occurs when the alarm triggers.
Note: The preconfigured alarms already have default trigger conditions defined for them. You only need to set which action occurs when the alarm triggers.
-
Virtual machines disappear from the virtual switch diagram in the Networking View for host configuration
In the vSphere Client Networking tab for a host, virtual machines are represented in the virtual switch diagram. If you select another host and then return to the Networking tab of the first host, the virtual machines might disappear from the virtual switch diagram.
Workaround: Select a different view in the Configuration tab, such as Network Adapters, Storage, or Storage Adapters, and return to the Networking tab.
-
Starting or stopping the vctomcat Web service at the Windows command prompt might result in an error message
On Windows operating systems, if you use the net start and net stop commands to start and stop the vctomcat Web service, the following error message might appear:
The service is not responding to the control function.
More help is available by typing NET HELPMSG 2186.
Workaround: You can ignore this error message. If you want to stop the error message from occurring, modify the registry to increase the default timeout value for the service control manager (SCM).
For more information, see the following Microsoft KB article: http://support.microsoft.com/kb/922918.
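A hedged sketch of the registry change that the Microsoft article describes (the ServicesPipeTimeout value is in milliseconds; 60000 is only an example value, and a reboot is required for the change to take effect):
rem raise the service control manager timeout to 60 seconds
reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 60000 /f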
-
The vc-support command uses a 64-bit DSN application and cannot gather data from the vCenter Server database
When you use the VMware cscript vc-support.wsf command to retrieve data from the vCenter Server database, the default Microsoft cscript.exe application is used. This application is configured to use a 64-bit DSN rather than the 32-bit DSN that the vCenter Server database requires. As a result, errors occur and you cannot retrieve the data.
Workaround: At a system prompt, run the vc-support.wsf command with the 32-bit DSN cscript.exe application:
%windir%\SysWOW64\cscript.exe vc-support.wsf
-
The vSphere Client Roles menu does not display role assignments for all vCenter Server systems in a Linked Mode group
When you create a role on a remote vCenter Server system in a Linked Mode group, the changes you make are propagated to all other vCenter Server systems in the group.
However, the role appears as assigned only on the systems that have permissions associated with the role. If you remove a role, the operation only checks the status
of the role on the currently selected vCenter Server system. However, it removes the role from all vCenter Server systems in the Linked Mode group without issuing a warning that the role might be in use on the other servers.
Workaround: Before you delete a role from a vCenter Server system, ensure that the role is not being used on other vCenter Server systems. To see whether a role is in use, go to the Roles view and use the navigation bar to select each vCenter Server system in the group. The role's usage is displayed for the selected vCenter Server system.
See vSphere Basic System Administration to learn about best practices for users and groups as well as information on setting roles for Linked Mode vCenter Server groups.
-
Joining a Linked mode group after installation is unsuccessful if UAC is enabled on Windows Server 2008
When User Account Control (UAC) is enabled on Windows Server 2008 32- or 64-bit operating systems and you try to join a machine to a Linked Mode group on a system that is
already running vCenter Server, the link completes without any errors, but it is unsuccessful. Only one vCenter Server appears in the inventory list.
Workaround: Complete the following procedures.
After installation, perform the following steps to turn off UAC before joining a Linked Mode group:
- Select Start > Settings > Control Panel > User Accounts to open the User Accounts dialog box.
- Click Turn User Account Control on or off.
- Deselect User Account Control (UAC) to help protect your computer and click OK.
- Reboot the machine when prompted.
Start the Linked Mode configuration process as follows:
- Select Start > All Programs > VMware > vCenter Server Linked Mode Configuration.
- Click Next.
- Select Modify Linked-Mode configuration and click Next.
- Click Join this vCenter Server instance to an existing Linked-Mode group or another instance and click Next.
- Enter the server name and LDAP port information and click Next.
- Click Continue to complete the installation.
- Click Finish to end the linking process.
Log in to one of the vCenter Servers and verify that the servers are linked. After the vCenter Servers are linked, turn on UAC as follows:
- Select Start > Settings > Control Panel > User Accounts to open the User Accounts dialog box.
- Click Turn User Account Control on or off.
- Select User Account Control (UAC) to help protect your computer and click OK.
- Reboot the machine when prompted.
-
The vCenter Server Resource Manager does not update the host tree after a migration in a cluster that has neither DRS nor HA enabled
If you use vMotion to power on or migrate a virtual machine on clusters that have neither HA nor DRS enabled, the operations might fail with one of the following messages:
The host does not have sufficient memory resources to satisfy the reservation.
The host does not have sufficient CPU resources to satisfy the reservation.
This problem occurs even when the host appears to have sufficient capacity available and occurs only if both hosts are in the same cluster.
When vMotion is used to migrate a virtual machine to a host or power it on under the new host, vCenter Server assesses whether the host has sufficient unreserved resources to meet the requirements
of the virtual machine. However, the internal data structures used for this assessment are not updated when you use vMotion to migrate a virtual machine from a source host to a destination host in a cluster after having
powered on the virtual machine on the source host. Any future admission control calculations for the source host consider the reservation of this virtual machine, even though it is no longer running on the host.
This behavior might cause power-on and vMotion operations that target the source host to fail.
Note: These failures are reported as the following faults in the log file:
vim.fault.InsufficientHostCpuCapacityFault
vim.fault.InsufficientHostMemoryCapacityFault
Workaround: Reconfigure the virtual machine's reservation, or power the machine on or off. These actions force vCenter Server to update its internal data structures.
-
Networking problems and errors might occur when analyzing machines with VMware Guided Consolidation
When a large number of machines are under analysis for Guided Consolidation, the vCenter Collector Provider Services component of Guided Consolidation might be mistaken for a virus or worm by the operating system on which the Guided Consolidation functionality is installed. This occurs when the analysis operation encounters a large number of machines that have invalid IP addresses or name resolution issues. As a result, a bottleneck occurs in the network and error messages appear.
Workaround: Do not add machines for analysis if they are unreachable. If you add machines by name, make sure the NetBIOS name is resolvable and reachable. If you add machines by IP address, make sure the IP address is static.
-
When you run the Linked Mode Configuration Wizard after linking a vCenter Server system to a group in a pure IPv6 environment, there is no option to isolate the vCenter Server system from Linked Mode
If vCenter Server is linked in a pure IPv6 environment (Windows 2008 x32 or Windows 2008 x64) and you invoke the Linked Mode Configuration Wizard, the Join vCenter Server instance to existing Linked Mode group or another instance option is enabled. There is no option to isolate the vCenter Server system from Linked Mode.
Workaround: Configure the vCenter Server system with mixed mode (IPv4 and IPv6) and invoke the Linked Mode Configuration Wizard: Start > VMware > Linked Mode Configuration Wizard. The Join option is disabled and the Isolate this vCenter server instance from linked mode group option is enabled.
Note: If you configure a vCenter Server system with mixed mode, the domain controller must also be in mixed mode. If you do not want to use mixed mode in your IPv6 environment, you must uninstall vCenter Server to isolate the system from Linked Mode.
-
Virtual machine templates stored on shared storage become unavailable after Distributed Power Management (DPM) puts a host in standby mode or when a host is put in maintenance mode
The vSphere Client associates virtual machine templates with a specific host. If the host storing the virtual machine templates is put into standby mode by DPM or into maintenance mode, the templates appear disabled in the vSphere Client. This behavior occurs even if the templates are stored on shared storage.
Workaround: Disable DPM on the host that stores the virtual machine templates. When the host is in maintenance mode, use the Datastore browser on another host that has access to the datastore on which the templates are stored, and that is not in maintenance or standby mode, to locate the virtual machine templates. You can then provision virtual machines by using those templates.
-
You might encounter a LibXML DLL module load error when you install the vSphere CLI for the first time on some Windows platforms, such as Windows Vista Enterprise SP1 32-bit
-
New: Incorrect links on ESX and ESXi Welcome page
The download links in the vSphere Remote Command Line and vSphere Web Services SDK sections, as well as the links to download the vSphere 4 documentation and VMware vCenter, are mapped incorrectly on the Welcome page of ESX and ESXi.
Workaround: Download the products from the VMware Web site.
-
On Nexus 1000v, distributed power management cannot put a host into standby
If a host does not have Integrated Lights-Out (iLO) or Intelligent Platform Management Interface (IPMI) support for distributed power management (DPM), the host can still use DPM provided that all of its physical NICs that are added to the Nexus 1000V DVS support Wake-on-LAN. If even one of those physical NICs does not support Wake-on-LAN, DPM cannot put the host into standby.
Workaround: None.
Virtual Machine Management
-
Custom scripts assigned in vmware-toolbox for suspend power event do not run when you suspend the virtual machine from the vSphere Client
If you have assigned a custom script to the suspend power event in the Script tab of vmware-toolbox and you have configured the virtual machine to run VMware Tools scripts when it is suspended, the custom script does not run when you suspend the virtual machine from the vSphere Client.
Workaround: None
-
Automatic VMware Tools upgrade on guest power on reboots the guest automatically without issuing a reboot notification
If you select to automatically update VMware Tools on a Windows Vista or Windows 2008 guest operating system, when the operating system powers on, VMware Tools is updated and the guest operating system automatically reboots without issuing a reboot notification message.
Workaround: None
-
An IDE hard disk added to a hardware version 7 virtual machine is defined as Hard Disk 1 even if a SCSI hard disk is already present
If you have a hardware version 7 virtual machine with a SCSI disk already attached as Hard Disk 1 and you add an IDE disk, the virtual machine alters the disk numbering. The IDE disk is defined as Hard Disk 1 and the SCSI disk is changed to Hard Disk 2.
Workaround: None. However, if you decide to delete one of the disks, do not rely exclusively on the disk number. Instead, verify the disk type to ensure that you are deleting the correct disk.
-
Reverting to snapshot might not work if you cold migrate a virtual machine with a snapshot from an ESX/ESXi 3.5 host to an ESX/ESXi 4.0 host
You can cold migrate a virtual machine with snapshots from an ESX/ESXi 3.5 host to an ESX/ESXi 4.0 host. However, reverting to a snapshot after the migration might not work.
Workaround: None
-
The vCenter Server fails when the delta disk depth of a linked virtual machine clone is greater than the supported depth of 32
If the delta disk depth of a linked virtual machine clone is greater than the supported depth of 32, the vCenter Server fails and the following error message appears:
Win32 exception: Stack overflow
In such instances, you cannot restart the vCenter Server unless you remove the virtual machine from the host or clean the vCenter Server database. Consider removing the virtual machine from the host rather than cleaning the vCenter Server database, because it is much safer.
Workaround: Perform the following steps:
- Log in to the vSphere Client on the host.
- Display the virtual machine clone in the inventory.
- Right-click the virtual machine and choose Delete from Disk.
- Restart the vCenter Server.
Note: After you restart the vCenter Server, if the virtual machine is listed in the vSphere Client inventory and the Remove from Inventory option is disabled in the virtual machine context menu, you must manually remove the virtual machine entry from the vCenter database.
-
New: Creating a new SCSI disk in a virtual machine can result in an inaccurate error message
When you create a new SCSI disk in a virtual machine and you set the SCSI bus to virtual, an error message is issued with the following line:
Please verify that the virtual disk was created using the "thick" option.
However, thick by itself is not a valid option. The correct option is eagerzeroedthick.
Workaround: Using the command line, create the SCSI disk with the vmkfstools command and the eagerzeroedthick option.
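For example, a minimal sketch that creates a 10 GB disk in the eagerzeroedthick format (the size and datastore path are hypothetical):
# create the virtual disk with the eagerzeroedthick format
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/<datastore>/<vmname>/<vmname>_1.vmdk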
-
The Installation Boot options for a virtual machine are not exported to OVF
When you create an OVF package from a virtual machine that has the Installation Boot option enabled, this option is ignored during export. As a result, the OVF descriptor is missing the InstallSection element, which provides information about the installation process. When you deploy an OVF package, the InstallSection element is parsed correctly.
Workaround: After exporting the virtual machine to OVF, manually create the InstallSection parameters in the OVF descriptor. If a manifest (.mf) file is present, you must regenerate it after you modify the OVF descriptor.
Example: <InstallSection ovf:initialBootStopDelay="300">
<Info>Specifies that an install boot is needed.</Info>
</InstallSection>
The inclusion of the InstallSection parameters in the descriptor informs the deployment process that an install boot is required to complete deployment. The ovf:initialBootStopDelay attribute specifies the boot delay.
See the OVF specification for details.
-
New: Following the suspension and resumption of a virtual machine, disabling of the VMXNET3 adapter can fail
If the network connection goes into an undefined state between suspension and resumption, for example, if the port-group name changes, the virtual network device cannot communicate the new network connection to the driver. This state prevents the VMXNET3 adapter from being disabled, uninstalled, or updated.
Workaround: Reconnect the adapter to a valid port group.
vMotion and Storage vMotion
-
Reverting to a snapshot might fail after reconfiguring and relocating the virtual machine
If you reconfigure the properties of a virtual machine and move it to a different host after you have taken a snapshot of it, reverting to the snapshot of that virtual machine might fail.
Workaround: Avoid moving virtual machines with snapshots to hosts that are very different from the original host (for example, a different ESX/ESXi version or a different CPU type).
-
Using Storage vMotion to migrate a virtual machine with many disks might time out
A virtual machine with many virtual disks might be unable to complete a migration with Storage vMotion. The Storage vMotion process requires time to open, close, and process disks during the final copy phase. Storage vMotion migrations of virtual machines with many disks might time out because of this per-disk overhead.
Workaround: Increase the Storage vMotion fsr.maxSwitchoverSeconds setting in the virtual machine configuration file to a larger value. The default value is 100 seconds. Alternatively, at the time of the Storage vMotion migration, avoid running a large number of provisioning operations, migrations, power on, or power off operations on the same datastores the Storage vMotion migration is using.
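For example, a minimal sketch of the entry in the virtual machine's .vmx configuration file (300 seconds is only an example value; edit the file while the virtual machine is powered off so that the change is not overwritten):
fsr.maxSwitchoverSeconds = "300"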
-
Disk format selection during Storage vMotion to an NFS volume might be overridden by the NFS server
When you use Storage vMotion to migrate a virtual disk to an NFS volume or perform other virtual machine provisioning that involves NFS volumes, the disk format is determined by the NFS server where the destination NFS volume resides. This overrides any selection you made in the Disk Format menu.
Workaround: None
-
If ESX/ESXi hosts fail or reboot during Storage vMotion, the operation can fail and virtual machines might become orphaned
If hosts fail or reboot during Storage vMotion, the Storage vMotion operation can fail. The destination virtual machine's virtual disks might show up as orphaned in the vSphere inventory after the host reboots. Typically, the virtual machine's state is preserved before the host shuts down.
If the virtual machine does not show up in an orphaned state, check to see if the destination VMDK files exist.
Workaround: You can manually delete the orphaned destination virtual machine from the vSphere inventory. Locate and delete any remaining orphaned destination disks if they exist on the datastore.
-
Storage vMotion of thick to thin virtual disk fails
Migrating a virtual machine whose virtual disks are configured at the VMFS maximum file size limit reports the following error:
File[vol] <vmpath> is larger than the maximum size supported by datastore <destination datastore>.
Workaround: None. Do not configure a virtual machine disk to its maximum volume limit if you plan to migrate the virtual machine.
VMware High Availability and Fault Tolerance
-
Failover to VMware FT secondary virtual machine produces error message on host client
When VMware Fault Tolerance is failing over to a secondary virtual machine, if the host chosen for the secondary virtual machine has recently booted, the host client sees this attempt as failing and displays the following error message:
Login failed due to a bad username or password.
This error message is seen because the host has recently booted and it is possible that it has not yet received an SSL thumbprint from the vCenter Server. After the thumbprint is pushed out to the host, the failover succeeds. This condition is likely to occur only if all hosts in an FT-enabled cluster have failed, causing the host with the secondary virtual machine to be freshly booted.
Workaround: None. The failover succeeds after a few attempts.
-
Changing the system time on an ESX/ESXi host produces a VMware HA agent error
If you change the system time on an ESX/ESXi host, after a short time interval, the following HA agent error appears:
HA agent on <server> in <cluster> in <data center> has an error.
This error is displayed in both the event log and the host's Summary tab in the vSphere Client.
Workaround: Correct the host's system time and then restart vpxa by running the service vmware-vpxa restart command.
-
Upgrading from an ESX/ESXi 3.x host to an ESX/ESXi 4.0 host results in a successful upgrade, but VMware HA reconfiguration might fail
When you use vCenter Update Manager 4.0 to upgrade an ESX/ESXi 3.x host to ESX/ESXi 4.0, if the host is part of an HA or DRS cluster, the upgrade succeeds and the host is reconnected to vCenter Server, but HA reconfiguration might fail. The following error message is displayed on the host Summary tab:
HA agent has an error: cmd addnode failed for primary node: Internal AAM Error - agent could not start. : Unknown HA error.
Workaround: Manually reconfigure HA by right-clicking the host and selecting Reconfigure for VMware HA.
-
VMware Fault Tolerance does not support IPv6 addressing
If the VMkernel NICs for Fault Tolerance (FT) logging or vMotion are assigned IPv6 addresses, enabling Fault Tolerance on virtual machines fails.
Workaround: Configure the VMkernel NICs with IPv4 addressing.
-
Hot-plugging of devices is not supported when FT is enabled on virtual machines
The hot-plugging feature is not supported on virtual machines on which VMware Fault Tolerance is enabled. You must turn off Fault Tolerance temporarily before you hot-plug a device. After hot-plugging the device, you can turn Fault Tolerance back on. However, after hot-removing a device, you must reboot the virtual machine before you turn Fault Tolerance back on.
Top of Page