
Known Issues

The known issues with this release are grouped as follows:

Installation and Licensing

Upgrade and Security

Consolidated Backup

Server Configuration

VI Client and Web Access

Virtual Machine Management

VirtualCenter Services

Miscellaneous


Installation and Licensing

ESX Server and VirtualCenter Installation

  • Graphical installer might fail to load using x445 and IBM RSA II interface

    Description: Graphical installer might fail to load using x445 and IBM RSA II interface

    When attempting to load the ESX Server installer in graphical mode with the RSA II interface, the installer might crash with an error which looks like:

    1...2...3...4...5.... X server started successfully
    (mini-wm:110): Gtk-WARNING **: cannot open display: :1

    Workaround: Use the text installation mode by specifying "ESX Server text" at the boot prompt.

  • Cannot Load Video or Mouse

    Description: A rack monitor that IBM sells (PN 23K4802, MT17231ux) requires that VMware ESX Server (any version) be installed in text mode. This is a customer-experience issue. VMware probes the monitor, which is detected as a "Smart Cable," but cannot load the video or mouse drivers. Text mode works correctly. This issue affects all systems that attach to this rack-mounted video solution.

    Workaround: None

  • Cannot Log On to VirtualCenter Server Using VI Client after VirtualCenter Server Upgrade or Installation

    Description: After you install or upgrade to VirtualCenter 2.0, the VirtualCenter service does not start properly, and you cannot log on to the VirtualCenter server using the VI Client. Checking the status of the VirtualCenter service by choosing Start > Programs > Administrative Tools > Services shows the VirtualCenter service as Stopped.

    Workaround: This problem is caused by a port conflict between the VirtualCenter service and another application. In addition to listening for VI Client connections on port 902, VirtualCenter also uses:

    • Port 80 for the VirtualCenter WebServer components used by VI Web Access
    • Port 443 for the VirtualCenter SDK web service

    If applications such as Microsoft IIS have conflicts on these ports, one solution is to stop the conflicting service, and start the VirtualCenter service as follows:

    1. Choose Start > Programs > Administrative Tools > Services to display the Services control panel.
    2. Right-click on the conflicting service, and choose Stop from the dropdown menu.
    3. Right-click on the VirtualCenter service, and choose Start from the dropdown menu.

    If you must run VirtualCenter and the conflicting service simultaneously, another solution is to change the default ports used by the VirtualCenter components:

    1. Uninstall VirtualCenter server.
    2. Reinstall VirtualCenter server, specifying different default ports in the installation wizard to avoid the conflicts.
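
    To confirm which application holds the conflicting port, you can check the listeners from a Windows command prompt. The commands below are a hedged sketch only; the PID shown is a placeholder, and IIS's W3SVC service is used purely as an example of a conflicting service:

    rem Find the process IDs listening on ports 80 and 443
    netstat -ano | findstr ":80 "
    netstat -ano | findstr ":443 "
    rem Identify the owning process by its PID, then stop its service (IIS shown as an example)
    tasklist /FI "PID eq 1234"
    net stop W3SVC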

  • Installing ESX Server 3.0 on IBM e326 Causes Keyboard to Become Unresponsive

    Description: There are two possible symptoms:

    • USB keyboard/mouse do not function during installation,
    • USB devices other than keyboard/mouse do not function when running ESX Server (even in service console mode)

    This problem has two distinct manifestations, one at installation time and one after installation. In the former case, all USB devices, including the keyboard and mouse, do not function during installation. This occurs after the initial selection of a normal or text mode install (for example, after pressing the Enter key or typing ESX Server text at the initial installation menu). In the latter case, the USB stack does not load, with the result that the USB keyboard and mouse function in USB legacy mode but all other USB devices are non-functional.

    Workaround: There are two distinct workarounds, one for installation and one for after installation. Both are required.

    For installation, specify noapic at the initial installation menu. To do this for graphical installation, rather than just pressing the enter key, type esx noapic and press the enter key. For text installation, type ESX Server noapic text and press the enter key.

    After installation USB devices other than keyboard and mouse will not be functional.

    1. Edit the file /boot/grub/grub.conf (save a backup of the file first) and replace the option noapic with vmnixACPI at the end of each of the lines that begin with the word kernel. The grub.conf file can only be accessed by the root account.
    2. When you have finished your edits save the /boot/grub/grub.conf file.
    3. Next edit the file /etc/modules.conf (again, save a backup copy of the file first). Add the following lines:
      • alias usb-controller usb-ohci
      • alias usb-controller1 ehci-hcd
    4. Save the /etc/modules.conf file.
    5. Reboot.
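
    For reference, after these edits the relevant lines might look like the following sketch. The kernel line shown is a placeholder; keep your system's existing options and change only the trailing noapic:

    # /boot/grub/grub.conf: each kernel line now ends with vmnixACPI instead of noapic
    kernel /vmlinuz-<version> ro root=<root-device> vmnixACPI
    # /etc/modules.conf: the two added USB controller aliases
    alias usb-controller usb-ohci
    alias usb-controller1 ehci-hcd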

  • ESX Server Installer Unable to Display on Some Monitors

    Description: When booting from ESX Server installation media, the video display on some systems might go blank or report that it is unable to display at the current resolution. In particular, we have had field reports that this happens with some KVM systems.

    Workaround: At the ESX Server installer mode selection screen, type "ESX Server vga=788" to change the VGA display resolution.

  • ESX Server Installation Errors when Using Remote CD-ROM in IBM Management Module or RSA II

    Description: When using the remote CD-ROM feature provided in IBM Management Module or RSA II Remote Control during ESX Server 3.0 installations, the installation might fail with an error. A SCSI CD-ROM error in the kernel log might be a symptom of this condition.

    To view the kernel log, do the following:

    1. At the ESX Server welcome screen, press <Ctrl>-<Alt>-<F2>
    2. Type "dmesg | less" to display the kernel log.

    Workaround: Perform the ESX Server 3.0 installation using a method that does not require the remote CD-ROM, such as using the physical CD-ROM drive in the server, or using a network installation.

  • Display Goes Blank When Installing Using Remote Interface on IBM and Intel Blade Servers

    Description: When booting from the ESX Server 3.0 installation media using the remote interface on some Intel or IBM blade servers, the display window goes blank or displays an error message, "eServer, No Video Available". Note that this problem does not occur when installing locally, only when using the remote interface.

    Workaround: For Intel blade servers, such as the SBX 82, type ESX Server vga=788 nofb at the ESX Server installer mode selection screen to start the graphical installer, or type ESX Server text vga=788 to start the text installer.

    For IBM blade servers, such as the HS20, type ESX Server vga=788 at the ESX Server installer mode selection screen to start the graphical installer, or type ESX Server text vga=788 to start the text installer.

  • Cannot Install VirtualCenter Server with MSDE as Database

    Description: If there is already a dedicated MSDE_VC database instance installed, installation of VirtualCenter Server fails when you select the option to install and use a dedicated MSDE server instance.

    If the MSDE_VC is already installed and has the database created under it, the older database is not removed or overwritten.

    Workaround: Remove the MSDE_VC database:

    1. Choose Start > Settings > Control Panel > Add/Remove Programs.
    2. Select Microsoft SQL Server Desktop Engine (MSDE_VC).
    3. Click Remove.

    When you have removed the MSDE_VC database instance, repeat the VirtualCenter server installation.

    If you are not able to successfully remove the MSDE_VC database instance using Add/Remove Programs, delete the VCDB.mdf and VCDB.ldf files under <MSDE Install Folder>\MSSQL$MSDE_VC\Data\.

  • ESX Server 3.0 System, with More than 128 LUNs Attached, Fails to Install

    Description: During a CD installation, the system fails and the installation process exits abnormally. Error messages similar to the following are displayed on the console:

    Waiting for the X Server to start... log located in /tmp/X.log
    1...2...3...4...5... X server started successfully.
    Traceback (most recent call last):

    File "/usr/bin/anaconda", line 1016, in ?
      iutil.makeDriveDeviceNodes()
    File "/usr/lib/anaconda/iutil.py", line 415, in makeDriveDeviceNodes
      isys.makeDevInode (drive, "/dev/%s" % (drive,))
    File "/usr/lib/anaconda/isys.py", line 372, in makeDevInode
      _isys.mkdevinode(name,fn)

    Workaround: Before installation starts, temporarily remove access to more than 128 LUNs by disconnecting any Fibre Channel cables from the HBAs that lead to the storage, or exclude the storage target from the active fabric zone.

    After the system completely boots up, re-enable the connection to the storage.

  • ESX Server Installer is Unable to See Disks if Both SAS and SCSI Controllers are Used

    Description: During installation of ESX Server 3.0, certain disks might not be visible as installation targets if you use both SAS and SCSI LSI-based controllers.

    The problem is caused by the fact that the two ESX Server disk controller drivers, mptscsi and mptscsi_2xx, cannot run simultaneously. The mptscsi driver is the base driver that supports a wide variety of mptscsi family devices. Mptscsi_2xx is present in this release to support only a small number of newer LSI SAS devices. Because these two drivers cannot coexist, it is not possible to have both older LSI SCSI cards and newer LSI SAS I/O controllers in the same server.

    You might encounter this problem under the following circumstances:

    1. You are installing ESX Server on new server hardware that is not generally available to the broad market. This new server hardware uses one of the following LSI SAS on-board controllers. At the same time, you are attempting to insert an older LSI-based SCSI card into your server box.

      LSI Logic|MPT Fusion SAS1064 PCI-X
      LSI Logic|MPT Fusion SAS1068 PCI-X
      LSI Logic|MPT Fusion SAS1064 PCI-E
      LSI Logic|MPT Fusion SAS1068 PCI-E
      LSI Logic|MPT Fusion SAS1066E PCI-E
      LSI Logic|MPT Fusion SAS1064A
      LSI Logic|MPT Fusion SAS1066
      LSI Logic|MPT Fusion SAS1078

    2. You are installing ESX Server on an older server that has a built-in LSI SCSI chipset. In addition, you are plugging in a newer SAS card that uses one of the following LSI chips:

      LSI Logic|MPT Fusion SAS1064 PCI-X
      LSI Logic|MPT Fusion SAS1068 PCI-X
      LSI Logic|MPT Fusion SAS1064 PCI-E
      LSI Logic|MPT Fusion SAS1068 PCI-E
      LSI Logic|MPT Fusion SAS1066E PCI-E
      LSI Logic|MPT Fusion SAS1078

    3. You are plugging both a SCSI card and a SAS card into your server, and both use LSI chipsets.

      Workaround: None

Licensing

  • Some Pre-Checked Items Cannot Be Checked Back to License Pool

    Description: Using the VI Client to change the ESX Server edition back to unlicensed might not automatically release feature add-ons (such as VMotion, vSMP, and Consolidated Backup) that are also enabled.

    You can use host-based and server-based licensing in the same environment. However, the default global VirtualCenter licensing options must be changed. Otherwise VirtualCenter Server overrides individual host settings to match its own setting.

    Default global VirtualCenter settings cause VirtualCenter to override ESX Server unserved license file settings when VirtualCenter restarts or hosts are re-added to VirtualCenter.

    Workaround: Use the Licensed Features > Add-Ons > Edit menu to uncheck and release feature add-ons.

  • License Viewer might Display Incorrect Information After License Server Becomes Unreachable

    Description: When the license server becomes unreachable, the license viewer waits two heartbeats (one heartbeat is five minutes) before refreshing the licensing information displayed. This means that incorrect information might be displayed even after the license server becomes available again.

    Workaround: When the license server becomes available again, wait more than ten minutes before checking licensing information or configuring licensing. Until that time, licensing information will be invalid.

Upgrade and Security

Upgrading

  • ESX Server 2.0.1 Hosts Not Supported for this RC Release

    Description: ESX Server version 2.0.1 host connections and host upgrades are not supported in this release.

    Workaround: Upgrade ESX Server 2.0.1 to ESX Server 2.5.x. Then upgrade to this release.

  • Template upgrades do not work if the host is running ESX Server 2.x.

    Description: none

    Workaround: To upgrade ESX Server 2.x templates:

    1. Upgrade the ESX Server to version 3.0.
    2. Upgrade the VMFS volume on which the template resides to VMFS3.
    3. Upgrade the legacy template.

  • Virtual machines might fail to power-on after a VMFS upgrade

    Description: After a VMFS volume is upgraded successfully, powering on a virtual machine might generate the following error message: "Attempt to power on a virtual machine with the .vmx file not stored on a NAS or VMFS version 3 datastore. The virtual machine files must be relocated or VMFS upgraded."

    Workaround: Some of the virtual machine's .vmdk and .vmx files might need to be relocated to the VMFS volume by using the "Relocate virtual machine files" operation in the VI Client.

  • Installer crashes during RPM transaction while upgrading ESX Server 2.x to 3.0

    Description: The "Security" property on the Host System is set to Low and this exposes a problem in Redhat common code (rpmlib).

    Workaround: Prior to the upgrade, use the MUI to set the "Security" property to High.

  • After Upgrading my Virtual Machine, an Older Directory with a .HLOG File Remains

    Description: When upgrading from an older ESX Server version to ESX Server 3.0, virtual machines that are upgraded move from the console file system to the VMFS file system. In this process, it is possible that the old virtual machine directory is not fully deleted and is left with a single file with the extension .hlog.

    The upgrade process does not detect and delete old VMotion log files correctly, and this prevents the old virtual machine directory from being deleted.

    Workaround: Manually delete leftover directories. These files do not need to be saved.

  • Legacy Template Fails to Create Virtual Machine with Duplicate Name

    Description: If there is already a virtual machine with the same name as the legacy template, the template upgrade will fail with an error. The error message reads "name already exists".

    Workaround: Rename the existing virtual machine to a different name and then proceed with the legacy template upgrade.

  • ESX Server Host Unable to Power On Virtual Machines with Redo Logs After Upgrade

    Description: Redo logs prevent virtual machines from powering on after an apparently successful upgrade.

    Workarounds: Any virtual machines with redo logs (that is, with disks in undoable or append mode) must have their redo logs committed and must be in persistent virtual disk mode. For undoable disks, this means that you must close down the virtual machine and select Commit Changes. For append mode disks, this means that you must close down the virtual machine and select >xxxx yyyy.

  • Real-time Statistics Data Has Gaps for the Host in VirtualCenter 1.x After Being Managed by VirtualCenter 2.0

    Description: If a managed host is only disconnected (not removed) from VirtualCenter 2.0, then added to VirtualCenter 1.x, a process collecting statistics for VirtualCenter 2.0 competes with one from VirtualCenter 1.x. This results in gaps in the host statistics on VirtualCenter 1.x.

    Workaround: If you want to revert from VirtualCenter 2.0 to VirtualCenter 1.x management of an ESX Server 2.x host, remove (not just disconnect) the host from the VirtualCenter 2.0 inventory before adding it to the VirtualCenter 1.x inventory.

  • Beta 2 Virtual Machines Using Raw Device Mapping (RDM) Might need to be Re-Created

    Description: If your virtual machines from Beta 2 used RDMs, note the following:

    Check and possibly re-create each of the RDM virtual disk files located in your virtual machine's subdirectory on the VMFS or NAS datastore. The RDM files are typically identified by the string -rdm as part of their name. After the ESX Server 3.0 Beta 2 system has been upgraded to run ESX Server 3.0 RC, run the command vmkfstools -q, specifying the RDM that needs to be checked.
    For example, run: vmkfstools -q /vmfs/volumes/vmfs3_vol/beta2.vmdk.

    Note: A *.vmdk file (metadata file) and a *-rdm.vmdk file are created when you create an RDM.

    If the RDM needs to be re-created, the output of this command will be a message such as "Failed to get info ..." If the command correctly displays the device mapping with messages such as "Disk Id: ..." and "Maps to: ...", the RDM does not need to be re-created.

    Workaround: If the RDM must be re-created, remove the existing RDM and create it again using the following commands, specifying in the vmkfstools command the raw device that should be mapped.
    rm /vmfs/volumes/vmfs3_vol/beta2.vmdk
    rm /vmfs/volumes/vmfs3_vol/beta2-rdm.vmdk
    vmkfstools -r /vmfs/devices/disks/vmhba6:0:1:0 /vmfs/volumes/vmfs3_vol/beta2.vmdk

    Note: The vmkfstools command above also creates the corresponding *-rdm.vmdk file automatically.
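
    After re-creating the mapping, the same vmkfstools query used above can confirm that it resolves correctly (the path reuses the example above):

    # Should now report "Disk Id: ..." and "Maps to: ..." rather than "Failed to get info ..."
    vmkfstools -q /vmfs/volumes/vmfs3_vol/beta2.vmdk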

  • ESX Server 3.0 Does Not Properly Support Upgrading Multi-Boot Installations

    Description: On a system with more than one copy of ESX Server installed, or where multiple installations are visible, such as on a SAN, the first installation encountered will be upgraded.

    Workaround: While upgrading, take special care to choose the correct drive, master boot record (MBR), or SAN LUN on which to install or upgrade ESX Server. In the case where multiple installations exist on a SAN, you can also mask off the SAN LUNs corresponding to the installations that you do not currently want to upgrade.

  • NIC Team Stops Working After ESX Server Upgrade

    Description: This situation could arise if you were running a 2.x bond in out-mac mode, and your physical switch is misconfigured with the corresponding ports as an etherchannel. This worked with ESX Server 2.x, because no check was made by default for incoming packets on the bond. In ESX Server 3.0, the default is to accept incoming packets only on the appropriate link of the team.

    Workaround: Reconfigure your physical switch to not create an etherchannel for the team's ports, or reconfigure your virtual switch to use ip-based load balancing.

  • Service Console Networking Stops Working When Using vmxnet_console and Upgrading From 2.x to 3

    Description: Upgrading vmxnet_console configurations from 2.x to 3.0 is not supported. If you configured your ESX Server 2.x system to use the vmxnet_console network driver, which is typically used to share a particular physical network adapter between virtual machines and the service console, and is activated in ESX Server 2.x by using the CLI command insmod vmxnet_console devName=<nic1,nic2,...>, you might lose your network configuration after upgrading.

    Workaround: Log into the service console and bring up networking manually, then connect to the system using the Virtual Infrastructure Client and reconfigure networking as needed.
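
    As an illustration only, one way to bring up Service Console networking manually is to re-create the Service Console interface with esxcfg-vswif. The interface name, port group, and addresses below are placeholders, and the options are given from memory of ESX Server 3.x, so verify them against the command's help output:

    # Re-create the Service Console interface on the "Service Console" port group
    esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.0.10 -n 255.255.255.0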

  • VirtualCenter's Datastore Browser Does Not Display Virtual Disk Files for Virtual Machines on ESX Server 2.x Hosts

    Description: When browsing a datastore on an ESX Server 2.x host, you cannot see the virtual disk files belonging to powered-on virtual machines.

    Workaround: You can see the disk file for a powered-on virtual machine on ESX Server 2.x as an ordinary file if you search for all files and folders in the datastore browser. However, you won't see it as a virtual disk while the virtual machine is powered on.

  • In-Place, No Migration Upgrade Only for Virtual Machines with RDM Disks

    Description: Migration of powered-on and powered-off virtual machines with raw device mappings (RDMs) is currently non-functional. This means that upgrade procedures that use migration to reduce virtual machine downtime are not possible when virtual machines with RDMs are involved. Only an upgrade without migration is possible.

    Workaround: Use an in-place upgrade procedure with no migration. Refer to "Upgrading a Single Host with Virtual Machines on a Local Disk" in the VMware Virtual Infrastructure Installation and Upgrade Guide.

Security

  • Problem with running 'service iptables start'

    Description: Running 'service iptables start' disables and/or modifies the ESX Server firewall configuration.

    Workaround: Perform all firewall configuration with esxcfg-firewall. Do not use /etc/rc.d/init.d/iptables.
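
    For example, to review the current firewall state from the service console before and after making changes (a minimal sketch; verify the exact options against the esxcfg-firewall help output on your system):

    # Query the current ESX Server firewall configuration
    esxcfg-firewall -q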

  • Nessus Generates a Warning on Port 1311/TCP

    Description: If you are using Dell OpenManage, Nessus might report the following vulnerability:

    Warning -- found on port unknown (1311/tcp)

    Your web server seems to accept unlimited requests.
    It might be vulnerable to the 'WWW infinite request' attack, which
    allows a cracker to consume all available memory on your system.

    *** Note that Nessus was unable to crash the web server
    *** so this might be a false alert.

    Solution : upgrade your software or protect it with a filtering reverse proxy

    Risk factor : High
    BID : 2465
    Nessus ID : 11084

    This problem is not specific to ESX Server.

    Workaround: Contact Dell for more information on this warning.

  • Nessus Generates a Warning Indicating a VERITAS Backup Exec Security Advisory for Windows Servers

    Description: If you are running a Veritas Backup Exec agent for Windows Servers, Nessus might issue the following warning about unauthorized downloading of arbitrary files:

    The remote host is running a version of VERITAS Backup Exec Agent which is configured with a default root account. An attacker might exploit this flaw to retrieve files from the remote host.

    Solution: http://seer.support.veritas.com/docs/278434.htm


    Risk factor: High / CVSS
    Base Score : 9 (AV:R/AC:L/Au:NR/C:C/A:P/I:C/B:N)
    CVE : CVE-2005-2611
    BID : 14551

    Workaround: Apply the hot fixes on the Veritas Web page provided in the Nessus warning.

  • Nessus Generates a Warning for Vulnerability at Port 9 with the Ascend Router

    Description: If you are using an Ascend router and the firewall is turned off, Nessus might issue the following warning about unauthorized downloading of arbitrary files:

    It was possible to make the remote Ascend router reboot by sending
    it a UDP packet containing special data on port 9 (discard).

    An attacker might use this flaw to make your router crash continuously,
    preventing your network from working properly.

    Solution:
    Filter the incoming UDP traffic coming to port 9. Contact Ascend
    for a solution.

    Risk factor: High
    CVE: CVE-1999-0060
    BID: 714
    Nessus ID: 10019

    Workaround: Ignore the warning. It is a false alarm.

VMware Consolidated Backup

  • Uninstalling VMware Consolidated Backup (VCB) hangs at around 90% completion

    Description: Uninstalling VCB through "Add/Remove Programs" can sometimes result in the uninstall process hanging at around 90% completion.

    Workaround: The uninstall process (msiexec) can be killed manually using Task Manager. After you kill the process (at around 90%), all VCB files are deleted, and the entry for VCB in "Add/Remove Programs" is removed.

  • In VMware Consolidated Backup (VCB) Legato Networker IM, the default recover option in the Networker user interface might not show multiple full virtual machines backed up in a single save set.

    Description: Using the VCB Legato Networker IM, when multiple full virtual machines are backed up in a single save set, the Networker user window might show only the last full virtual machine backed up in the set during the restore operation. Although all the virtual machines in the save set are backed up, because there is a time difference of about 5 to 10 minutes between backups, the Networker user interface shows only the last backed-up virtual machine.

    Workaround: You can recover the virtual machines that are not shown in the Networker user interface either by changing the browse time or by using the save set recover option in the Networker user window, available under the operation menu.

  • Attempt to Perform File-Level Backup of a Virtual Machine With an Uninitialized Disk Fails.

    Description: If a virtual machine has an uninitialized disk, no file level backup of the virtual machine can be performed using VMware Consolidated Backup. An attempt to mount the virtual disks of such a virtual machine using VMware Consolidated Backup fails with the error message Error getting volumes from the disk set.

    Workaround: From within the virtual machine, initialize the disk using the Windows Disk Management tool or remove the uninitialized disk from the virtual machine.

  • Warning on Disconnect from VirtualCenter Server

    Description: When increasing the default log level on the VMware Consolidated Backup command line utility, you might see the following message when the VMware Consolidated Backup command line utility logs out of the VirtualCenter server:

    Got vmdb error -14 (Pipe connection has been broken) when invoking [logout] on [vim.SessionManager:SessionManager]

    Workaround: This error message is harmless and can be ignored.

  • Problems when Deleting VMware Consolidated Backup Snapshots while Backup is in Progress

    Description: If you try to delete the VCB backup snapshot through the VirtualCenter Client while a backup is in progress, the following will happen:

    • The snapshot operation will fail with a "Concurrent Access" error.
    • The entry for the VCB backup snapshot will be removed and not be visible in the Snapshot Manager, but it will still exist and backup will continue unaffected.
    • The snapshot manager will contain a "Consolidate Helper" snapshot.
    • When backup completes, the "Consolidate Helper" snapshot and the resources (redo logs) held on to by the VCB backup snapshot will be removed.

    Workaround: No workaround is required. Once backup completes, all resources reserved during backup will be freed.

  • Virtual Machines' Raw Disks cannot be Upgraded to RDMs by the Upgrade Script

    Description: When upgrading to ESX Server 3.0, the upgrade script does not convert virtual machines configured with the raw disk access to virtual machines configured with the raw device mapping (RDM) access.

    Workaround: If you are using raw disks, you must convert them to RDMs in ESX Server 2.5 before upgrading virtual machines to ESX Server 3.0. However, if you have already upgraded your virtual machines to ESX Server 3.0, do the following:

    1. Migrate the virtual machine that uses the raw disk back to ESX Server 2.5.
    2. Convert the virtual machine so it can use the RDM.
    3. Migrate the virtual machine to ESX Server 3.0.

  • Backing Up and Restoring Virtual Machines in ESX Server 3.0

    Description: An accompanying document describes how to back up and restore virtual machines from the ESX Server 3.0 Service Console command prompt. It walks you through the process of configuring the VMware Consolidated Backup Service Console command-line utilities and provides an example of how to use vcbMounter and vcbRestore to back up and restore virtual machines from the Service Console command line.

    Workaround: Refer to the document: Backing Up and Restoring Virtual Machines in ESX Server 3.0

  • Restoring ESX Server 2.5.x Virtual Machines in ESX Server 3.0

    Description: An accompanying document describes how to restore virtual machines that were backed up on ESX Server 2.5.x using vmsnap onto ESX Server 3.0, using the Service Console version of the VMware Consolidated Backup command-line utilities.

    Workaround: Refer to the document: Restoring ESX Server 2.5.x Virtual Machines in ESX Server 3.0

Server Configuration

Networking

  • If vmware-hostd is restarted, the ESX Server SNMP stops working

    Description: The ESX Server SNMP module maintains a live connection to vmware-hostd. When vmware-hostd is restarted, either automatically by vmware-watchdog (when vmware-hostd crashes or is killed) or manually by a user, the connection is broken, and any further use of the broken connection returns only errors.

    Workaround: Restart SNMPD by calling service snmpd restart.

  • Configuring a Duplicate IP Address for VMotion Causes an Error

    Description: Configuring VMotion creates an IP address error.

    Workaround: A host should not be configured for migration with VMotion if another host with the same IP address is already configured for migration with VMotion. The same rule applies here as with any network: that is, no two network entities of any type should ever share the same IP address if they are on the same network.

    Duplicate IP addresses between migrating hosts are not supported. Make sure that all the hosts you are going to use for migration with VMotion have unique IP addresses. Do this prior to configuring a host for migration with VMotion.

  • ESX Server supports up to 4096 network ports on its vSwitches, but not all are available to the user.

    Description: ESX Server reserves 128 ports up front for internal use. Beyond that, 8 ports of every vSwitch are reserved by the system when the vSwitch is created. The 8 reserved ports are accounted for by the VI Client in the values it displays (that is, it displays 56 for a 64-port vSwitch, 120 for a 128-port vSwitch, and so on).

    Basically, the total number of ports available for virtual machines is 4096 - (128 + (numVswitches * 8)).
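
    As a worked example of that arithmetic (illustration only), a host with four vSwitches has 3936 ports available:

    # 4096 total ports minus 128 internal ports and 8 reserved ports per vSwitch
    echo $(( 4096 - (128 + 4 * 8) ))    # prints 3936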

    Workaround: n/a

  • Service Console Can Lose Network Connectivity if not Configured Properly

    Description: When Service Console networking is misconfigured, the Service Console loses network connectivity, and as a result, the ESX Server host cannot be managed by the host client or the VirtualCenter client. Someone has to access the physical console or a remote console connection such as iLO to bring networking back.

    The Service Console can lose network connectivity if it is not configured properly; for example, if you delete the last working physical uplink from the vSwitch having the Service Console virtual NIC, if you delete the Service Console virtual NIC itself, if you set the Service Console with an invalid IP or gateway, and so on.

    Workaround: Be careful when configuring the Service Console virtual NIC or any property of its parent virtual switch that can affect the Service Console virtual NIC's connectivity (for example, the uplink). If possible, before updating the Service Console virtual NIC, create another independent working Service Console NIC so that, in the event the configuration brings the console NIC down, you can fall back to the second Service Console NIC and fix the configuration.

    General warning: It is possible to change Service Console network settings through VI Client in such a way that all network connectivity to the host is lost. You must make sure that you have direct console access to the host before doing so.

  • ESX Server 2.0.x Virtual Machine Displays an Error Message about Deprecated Ethernet at Power-on

    Description: After upgrading your VMFS and relocating the VMX configuration files, when you attempt to power on a virtual machine, you might see an error message: "Virtual Machine Message: msg.ethernet.configFailed: Deprecated ethernet0.devName found. Please use ethernet0.networkName instead. Failed to configure ethernet0."

    Workaround:

    1. From the service console edit the VMX file, found on this path:
      /vmfs/volumes/<datastore>/<virtual machine name>/<virtual_machine name>.vmx
    2. Delete all lines that start with ethernet, as in the sketch below.
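
    A sketch of step 2 from the service console, using placeholder datastore and virtual machine names (keep a backup, strip the ethernet lines, then re-add the network adapter through the VI Client so that ethernet0.networkName is written instead):

    cd /vmfs/volumes/datastore1/myvm
    cp myvm.vmx myvm.vmx.bak                      # keep a backup copy first
    grep -v '^ethernet' myvm.vmx.bak > myvm.vmx   # drop all lines starting with "ethernet"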

  • The NIC Stops Transmitting When Using Broadcom 5700 Rev 14 and 5701 Rev 15 for Heavy Traffic

    Description: Under heavy traffic, Broadcom 5700 Rev 14 and 5701 Rev 15 NICs might stop transmitting data. This generally happens when using VMotion or when transmitting a lot of data through the Service Console.

    Workaround: Choose another type of NIC. Do not use the Broadcom 5700 Rev 14 and 5701 Rev 15 for heavy traffic, VMotion, or the Service Console on an ESX Server host.

  • Adding a New Network Adapter Can Cause Loss of Service Console Connectivity Using the VI Client

    Description: Adding a new network adapter, in certain cases, can cause loss of service console connectivity and manageability using the VI Client due to network adapters getting renamed.

    Workaround: Reconfigure networking through the service console.

    1. Log in directly to your ESX Server host's console.
    2. Use the command esxcfg-nics -l to see what names have been assigned to your network adapters.
    3. Use the command esxcfg-vswitch -l to see which vSwitches, if any, are now associated with device names no longer shown by esxcfg-nics.
    4. Use the command esxcfg-vswitch -U <old_vmnic_name> <vswitch> to remove any network adapters that have been renamed.
    5. Use the command esxcfg-vswitch -L <new_vmnic_name> <vswitch> to re-add the network adapters, giving them the correct names (see the combined example below).
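
    Putting those steps together, a hypothetical session might look like this (adapter and vSwitch names are placeholders):

    esxcfg-nics -l                      # list the names currently assigned to the adapters
    esxcfg-vswitch -l                   # find vSwitches still bound to old, stale names
    esxcfg-vswitch -U vmnic1 vSwitch0   # unlink the renamed (stale) adapter
    esxcfg-vswitch -L vmnic2 vSwitch0   # re-link the adapter under its new name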

General Storage

  • VI Client Does Not Display Progress of a VMFS File System Upgrade

    Description: You cannot monitor the progress of a VMFS upgrade in progress. There is no progress meter.

    Workaround: Open a console and press Alt-F12 to see the VMkernel real-time log.

    Note that this console is not the service console (Alt-F1) or the textual welcome to ESX Server 3.0 screen (Alt-F11).

  • Sometimes ESX Server Host Unable to Power On Virtual Machines on Upgraded VMFS3 Datastore

    Description: When you convert a VMFS2 disk to VMFS3, there is a storage cache issue which causes the relocate to fail immediately after the VMFS upgrade. Specifically, you need to manually relocate configuration files (VMX files) after converting a VMFS2 disk volume to VMFS3.

    Workaround: To resolve this error message, select your ESX Server host in the left-hand inventory display, then select from the right-click menu: Relocate virtual machine files.

  • NEW 5/31/06 Do Not Create LUNs Larger Than 1 TB

    Description: None.

    Workaround: This limitation applies only to ESX Server 3.0 and VirtualCenter 2.0 Release Candidate. The solution will be to upgrade to the officially released version of the products.

  • Upgrading my VMFS-2 Volume Failed Due to Insufficient Free Space

    Description: If a VMFS-2 volume has insufficient free space (at least 1200 MB is required), an attempt to upgrade it fails. In such a case, the VI Client displays an informative error message, and Insufficient free space on volume X is printed in the VMkernel log.

    Workaround: To proceed, space must be freed up on the volume. This can be done via the following steps:

    1. Log into the Service Console via SSH or local console.
    2. Unload the vmfs2 and vmfs3 modules using vmkload_mod -u vmfs2 and vmkload_mod -u vmfs3.
    3. Load the fsaux module in VMFS-2 unlink mode using vmkload_mod fsaux fsauxFunction=fs2unlink.
    4. Move or remove files on the volume such that at least 1200MB becomes available on it. The rm and mv commands might be used.
    5. Wait 10 seconds, then unload the fsaux module using vmkload_mod -u fsaux.
    6. Reload the vmfs2 and vmfs3 modules using vmkload_mod vmfs2 and vmkload_mod vmfs3.
    7. Retry the upgrade.
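
    The complete sequence from the service console might look like the following sketch (the volume path and the file being removed are placeholders):

    vmkload_mod -u vmfs2
    vmkload_mod -u vmfs3
    vmkload_mod fsaux fsauxFunction=fs2unlink
    rm /vmfs/volumes/vmfs2vol/unneeded-file.vmdk   # free at least 1200 MB on the volume
    sleep 10                                       # wait before unloading fsaux
    vmkload_mod -u fsaux
    vmkload_mod vmfs2
    vmkload_mod vmfs3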

  • VMware ESX Server RC 2 does not support the hot-adding of SCSI disks to local SCSI HBAs

    Description: SCSI disks that are connected to a local SCSI HBA after the ESX Server host has booted are not recognized by the host. Rescanning the local SCSI HBAs will not cause the newly connected disks to be detected by the server.

    Workaround: The ESX Server must be rebooted in order to detect SCSI disks newly connected to local SCSI HBAs.

  • Error When Using Internal 2 channel Adaptec SCSI Cards Without Identical Configurations

    Description: Internal SCSI RAID cards that require the aacraid driver and that contain two channels need to have those channels configured identically in the BIOS. Otherwise the system will not boot off of the local drives.

    Note: No core dump file is available because the dump partition is not available when the system encounters the problem.

    Workaround: Either set both channels to SCSI or set both channels to RAID.

  • Legacy templates will fail after upgrading the shared LUN to VMFS3 from another host.

    Description: When you have both ESX Server 3.0 and ESX Server 2.x hosts sharing a VMFS2 LUN, and you upgrade the shared LUN to VMFS3, the legacy template upgrade fails.

    Workaround: There are two possible workarounds for this issue:

    • Upgrade the legacy templates before upgrading the shared LUN.
    • Upgrade the ESX Server 2.x host to ESX Server 3.0 after upgrading the shared LUN.

  • Performing multiple VMFS upgrades at the same time upgrades only the first one; the other VMFS upgrades fail.

    Description: Multiple concurrent VMFS upgrades do not work.

    Workaround: Upgrade your VMFS volumes one at a time.

  • Additional VI Clients are unable to connect to an ESX Server host during a VMFS2 to VMFS3 upgrade

    Description: After an upgrade from VMFS2 to VMFS3 is initiated through the VI Client, additional VI Client connections will be unable to connect directly to the ESX Server hosts accessing the datastores being upgraded.

    Workaround: Wait several minutes, until the VMFS2 to VMFS3 upgrade process is complete, before initiating new VI Client connections directly to an ESX Server host.

  • A VMFS partition is present on a disk but there is no datastore corresponding to the partition

    Description: If a VMFS partition has not been formatted properly, it might not be recognized as a datastore. The UI does not allow such partitions to be deleted either. When you use the "Add Storage" wizard to add a VMFS volume or diagnostic partition, the wizard does not recognize this portion of the disk as free space.

    Workaround: Log on to the ESX Server console and use fdisk to manually delete the partition.

  • Create datastore operation might fail if the disk also contains the ESX Server service console file system

    Description: Trying to create a datastore on a disk that contains the Service Console file system might fail with an error indicating an inability to update disk partitions. The problem is that if the Service Console is using the disk for its root partition, it might prevent modifications to the partition table.

    Workaround: The easiest workaround is to use another disk for a datastore. If this is not possible, it is possible to manually create the VMFS datastore. To do this, use fdisk on the /dev/sd[a-z]* device. Create a partition with type 0xfb. Format the VMFS datastore using

    vmkfstools -C vmfs3 vmhbaI:T:L:P

    where I corresponds to the initiator of the vmhba, T corresponds to the target number of the disk, L is the LUN number, and P is the partition number of the newly created partition.
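
    For example, if the Service Console disk is /dev/sda on vmhba0, target 0, LUN 0, and the newly created partition is number 4 (all of these values are placeholders for illustration), the commands are:

    fdisk /dev/sda                     # create a new partition and set its type to fb
    vmkfstools -C vmfs3 vmhba0:0:0:4   # format the new partition as a VMFS3 datastore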

  • Problems when Using Emulex Fibre Channel Cards

    Description: In multipathed configurations that use Emulex Fibre Channel cards, extreme failover testing conditions can trigger an ESX Server host failure. This issue is encountered after many hours of heavy failover testing and is unlikely to be encountered under normal use. There are no known data integrity issues to be triggered by this problem.

    When this failure occurs, the following information is displayed and also appears in your VMkernel log file (/var/log/vmkernel):

    Exception type 13 in world 1148:mks:Win2003N @ 0x9497bb:0x35f3a18:[0x9497bb]
    (lpfc_flush_disc_evtq+0x163(0x3d8c7030, 0x3fc19be0, 0x2)0x35f3a68:[0x950691]
    lpfc_linkdown+0xbd(0x3d8c7030, 0xdea,

    Exception type 14 in world 1085:vmm3:win2k3e @ 0x7e106f:0x34f7bc4:
    [0x7e106f]lpfcdd_7xx+0x406f(0x808d008, 0x808d100, 0x808bd98)
    0x34f7bf4:[0x7df0a1]lpfcdd_7xx+0x20a1(0x808d008, 0x808d100, 0x808bd98)

    Exception type 14 in world 1081:mks:NW65-SP5 @ 0x0:0x34e7bd8:[0x0](0x3d092510,
    0x5f6c33b8, 0x3425)0x34e7c20:[0x62e856]Timer_BHHandler+0x10e(0x0, 0x6f8408, 0x2000001)

    Workaround: None.

  • Browsing Extremely Large Datastores Leads to VI Client Failure

    Description: If you are using the VI Client datastore browser on very large directories (containing several thousand sub-directories and files), the system might run out of memory and crash.

    Workaround: None.

  • Virtual machines become unavailable when an inactive NAS datastore is removed

    Description: All virtual machines registered with a host become unavailable if you remove an inactive or unavailable NAS datastore.

    Workaround: From the command line run service mgmt-vmware restart.

  • No protection against deleting a disk backing file that is also used by another virtual machine, if the other virtual machine is not powered on

    Description: In VC 1.x, removing a disk from a virtual machine unconditionally left the disk file on the datastore. With VirtualCenter 2.0.0 we have added the option to delete the backing file when removing a disk from a virtual machine. If you choose to delete the file but it is in use by a powered-on virtual machine, an error will occur and the disk will not be deleted from the datastore. But if the only other users of the disk file are not currently powered on, the deletion will succeed and those other virtual machines will no longer function.

    Note: This problem is most likely to come up with RDMs, where the LUN is intended only to be used by one virtual machine at a time but multiple virtual machines do use it at different times. Luckily, in the RDM case it is also possible to recover from this problem.

    Workaround: If the deleted disk is an RDM and the target LUN hasn't been destroyed in the meantime, simply creating a new RDM for the same LUN will allow the other virtual machines to function again. Obviously this won't work for a regular virtual disk. In any case, the best protection is to be careful when choosing to delete the disk completely rather than remove it from the virtual machine but leave the backing file on the datastore.

NAS Storage

  • Disabling the QLogic PCI device in the BIOS is not recognized properly by ESX Server, and this results in a PSOD

    Description: During a fresh installation, a PSOD occurs if, prior to installation, the QLogic PCI module was disabled in the BIOS (by disabling the interrupt setting).

    This problem is specific to HP blades (in this case it was observed on an HP BL30p G1). ESX Server does not properly respect the PCI command bits used by the HP BIOS to disable the QLogic device, and it corrupts the text region, resulting in a PSOD.

    Workaround: Disable the booting from SAN feature by disabling the Host Adapter and the bootable setting inside the Qlogic BIOS instead of disabling the PCI device in the System BIOS.

  • NFS mounts for ESX Server 3.0 cannot be configured with root squash

    Description: ESX Server 3.0 has some processes that run as root by default. Consequently, when accessing NFS mounts from ESX Server 3.0, the mounts must be configured without the root squash option.

    Workaround: Either:

    • Disable root squash for NFS mounts mounted by ESX Server 3.0
    • Use the experimental "delegate user" feature, which runs the processes in question as a non-root account and allows the NFS mount to be configured with the root squash option. Running experimental features is not recommended for production use.

    Note: You don't have to disable root squash for the entire volume; you just have to trust root from the ESX Server host. This can be achieved by adding the IP address of the vmknic to the list of root trusted servers.
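
    On a Linux NFS server, for instance, the export entry might disable root squash only for the VMkernel interface's IP address. The path and address below are placeholders:

    # /etc/exports entry: read-write export with root squash disabled for this client
    /exports/vmstore   192.168.0.20(rw,no_root_squash,sync)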

  • Cold Migrating a Virtual Machine from ESX Server 2.5.3 or ESX Server 2.5.2 to ESX Server 3.0 Might Add New Hardware

    Description: In some configurations, performing cold migration from ESX Server 2.5.3 or ESX Server 2.5.2 to ESX Server 3.0 might cause a virtualized USB controller to be added to the virtual machine.

    Workaround: Manually disable USB after migrating the virtual machine. See the document "Introduction to Virtual Infrastructure."

  • Error Appears When Adding an NFS Datastore

    Description: When adding an NFS datastore, the error message A specified parameter was not correct appears.

    Workaround: Set the gateway before mounting an NFS server.

    1. Log into the VMware VI Client and select the server from the inventory panel.
    2. Click the Configuration tab, and click DNS and Routing.
    3. Click Properties.
    4. Click the Routing tab.
    5. Enter the default gateway under VMkernel.

  • NFS Mounts Are Restricted to 8 by Default

    Description: The default configuration allows only eight NFS mounts per ESX Server host.

    Workaround:
    To mount more than eight NFS mounts on an ESX Server host:

    1. Start the VI Client.
    2. Select the host from the inventory panel and click Advanced Settings on the Configuration tab.
    3. In the Advanced Settings dialog box, select NFS and set NFS.MaxVolumes to 32.
    4. Select Net and set Net.TcpipHeapSize to 30.
    5. Reboot the ESX Server host.

    Note: These settings enable up to 32 mounts on the ESX Server host.
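
    If you prefer the service console, the same advanced options can likely be set with esxcfg-advcfg. The option paths below are assumptions based on the VI Client names, so verify them with esxcfg-advcfg -g before relying on them:

    esxcfg-advcfg -s 32 /NFS/MaxVolumes     # allow up to 32 NFS mounts
    esxcfg-advcfg -s 30 /Net/TcpipHeapSize  # increase the TCP/IP heap to match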

iSCSI Storage

  • H/W iSCSI: ESX Server cannot be Installed on LUN 255

    Description: ESX Server might encounter an anaconda dump (exception) when installing on H/W iSCSI LUN ID 255.

    Workaround: Use a different LUN ID for the boot LUN.

  • iSCSI CHAP Authentication is Supported on a Per-Initiator (HBA) Basis

    Description: When using iSCSI CHAP with esx, customers can create unique name/secret pairs for each HBA on an ESX Server system that accesses their storage array. The name/secret pair is then set on each HBA port on the given ESX Server system. Per target (storage array) authentication on the ESX Server HBA is not supported. In installations with multiple storage arrays, the name/secret for the HBA on the ESX Server system must be the same on all arrays the system accesses. Bi-directional CHAP is not supported.

  • CHAP Secrets are Not Displayed

    Description: Upon returning to the CHAP configuration screen after setting the CHAP secret, the secret field displays blank rather than asterisks.

    Workaround: The secret is set and the issue is only cosmetic.

  • Hardware iSCSI Support is Experimental

    Description: Due to a performance issue that arises from I/O contention from multiple ESX Server hosts, hardware-initiated iSCSI is supported only experimentally in ESX Server 3.0. We understand the nature of the problem and have been working with the appropriate iSCSI vendors to resolve the issue so that we can provide full and complete support as early as possible after the ESX Server 3.0 release. We expect that all ESX Server and VirtualCenter functionality, such as multipathing, VMotion, and so on, will work correctly with hardware-initiated iSCSI, and evaluations of the technology should complete successfully, with the exception of load testing.

    Any ESX Server issue outside the realm of hardware-initiated iSCSI is still covered by your technical support contract. Using hardware-initiated iSCSI does not invalidate support for the ESX Server host as a whole. It is just the case that we cannot guarantee solutions to hardware-initiated iSCSI issues at this time. Please see the Storage Compatibility List for a list of arrays and adapters that work with ESX Server 3.0.

    Note: Software-initiated iSCSI is fully supported in ESX Server 3.0 and is the recommended configuration for production iSCSI deployments with ESX Server.

  • Problems with Configuring qla4010 Card when ESX Server Boots from iSCSI

    Description: When ESX Server boots from iSCSI, using the VI Client to set or change iSCSI HBA parameters might cause ESX Server to lose its connection to the iSCSI storage device.

    Workaround: Do not use the VI Client to set or modify such parameters as IP address, iSCSI name, discovery information, or CHAP credentials for the iSCSI HBA that ESX Server uses to connect to iSCSI storage. Instead, set these parameters through the QLogic BIOS Fast!UTIL.

  • System Time Accounting Incorrect for Software iSCSI

    Description: Resource management settings (reservation, limit, and shares) for virtual machines using software iSCSI might not be completely effective. Some virtual machines might get more or less CPU time than is set for them. If virtual machines using software iSCSI participate in a DRS cluster, accurate DRS resource allocations, recommendations, and migrations might be impacted.

    The reason is that CPU time consumed processing iSCSI I/O on behalf of a virtual machine is not charged correctly to the virtual machine.

    Workaround: None.

VI Client and Web Access

Web Access

  • Intermittent SSL Warnings Appear During Logout

    Description: When you use VI Web Access in Internet Explorer to connect to an ESX Server that is configured for HTTPS support and has been added to your list of local intranet sites, you might occasionally see the warning "This page is accessing information that is not under its control. This poses a security risk. Do you want to continue?" during logout.

    Workaround: Click Okay. The warning might reappear, but it will not impact your ability to continue using VI Web Access.

VI Client and Interface

  • There are several general known issues with the Virtual Infrastructure Client interface. These issues include:
    • Virtual Machine Names with Special Characters are Not Supported.
    • The Revert to Snapshot Option Might be Enabled Even When No Snapshots Exist.
    • VirtualCenter Does Not Produce an Exception When a Snapshot is Taken Under Conditions Not Supported by the Host.

  • Inaccessible virtual machines will be named "Unknown VM"

    Description: When an ESX Server host is rebooted or HostAgent is restarted, it needs to reload the Host Agent configuration of each registered virtual machine. If the .vmx file is inaccessible, the host is unable to read the configured name of the virtual machine, and the name defaults to "Unknown VM". This is only a problem during restarts. Temporarily losing access to storage does not cause virtual machine names to be set to "Unknown VM".

    Workaround: Rename the virtual machines that have gotten into this state once they become available again.

  • Receive "A Specified Parameter Was not Correct" Error in the VI Client Interface When a Virtual Machine is Powered On

    Description: This occurs because you have low disk space on your datastore where the virtual machine is located. During power on, enough space must be available to store the swap file associated with the virtual machine. Typically, this is equal in size to the amount of memory allocated to the virtual machine. A large-memory virtual machine, that is, a virtual machine allocated with 4 GB of RAM, requires 4 GB of free disk space on its datastore in order to power on.

    Workaround: Free up enough disk space on the datastore and try the power on operation again.

  • Virtual Machine Fails to Power on Due to a Lack of CPU or Memory Resources

    Description: When a virtual machine fails to power on due to a lack of CPU or memory resources, the user will receive a specific error message describing the failure; for example, "Admission check failed for cpu resource". This specific message is also shown in the Status column of the Recent Tasks view. However, the Events tab for the virtual machine will show a more generic message: "A general system error occurred."

    Workaround: None.

  • Error message in VC indicating that esx.conf.WRITELOCK could not be obtained

    Description: This generally happens when a user tries to perform host configuration. The esx.conf file has been locked for more than 10 seconds by the initrd regeneration process, causing other processes to fail until the regeneration is done.

    Workaround: Wait 30 seconds and try the operation again.

  • Not Able to Log In with the VI Client

    Description: This is an environment problem. The user account that runs the VI Client might not have the permission required to perform authentication (the "Act as part of the operating system" right described below).

    Workaround:

    1. In your Windows system, bring up your control panel (Administrative tools).
    2. Start Local Security Settings.
    3. Click Local Policies.
    4. Select User Rights Assignment.
    5. Open the Act as part of the operating system.
    6. Set either your current login or the Administrators group as having this authority (assuming that your current login is a local administrator).
    7. Reboot your system.
    At this point you should be able to log in with no problems.

  • Incorrect Input Text in Configuration Wizards Can Cause Unhandled Exceptions

    Description: Many of the free-form input text boxes in the configuration wizards have only limited validation of user input and might generate "Input string was not correct format" exceptions.

    Workaround: Please review the documentation for the wizard you’re using to find out what the limitations are on the individual field inputs. Then avoid entering non-valid inputs, such as names with special characters such as "?" and invalid IP addresses such as "255.256.888.999."

  • "Export Diagnostics" Option in the VI Client Might Fail to Resolve Hostnames on Other Domains

    Description: The "Export Diagnostics" option in the VI Client allows logs and other troubleshooting data to be downloaded from a managed esx. The URL used to download the diagnostic bundle, however, currently uses unqualified hostnames that prevent a VI Client from downloading successfully when the hostname cannot be resolved into a fully qualified hostname (that is, hostname.domain.com). This typically occurs when the VI Client is being run in a domain different than the domain for the VirtualCenter Server or ESX Server being managed.

    Workaround: Try running the VI Client from the same domain as the server, or access and download the needed logs manually.

  • VI Client Fails to Add a Host

    Description: The VI Client times out while trying to add an ESX Server host. This might occur if a previously configured NFS mount in the service console is not available at the time of the add-host operation.

    Workaround: If the service console on a host has any NFS mounts, check the availability of the mounts before adding the host to VirtualCenter. This can be done on the host machine by logging into the service console or by checking your NFS servers. If you cannot reach a mount, unmount it or comment it out from the /etc/fstab file, then reboot the host.
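
    A quick way to check the mounts from the service console before adding the host (standard Linux commands; a hung NFS mount typically stalls df):

    mount -t nfs        # list the NFS mounts currently active in the service console
    df -h               # if this command hangs, one of the NFS servers is unreachable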

Virtual Machine Management

Virtual Machine Configuration

  • VMware Tools installation from the Virtual Infrastructure Client does not force CD-ROM eject inside a Solaris guest

    Description: Initiating a VMware Tools installation from the Virtual Infrastructure Client does not force the CD-ROM to eject inside a Solaris guest.

    In the Virtual Infrastructure Client, right-click the Solaris virtual machine and choose Install VMware Tools. If the Solaris guest already has a CD-ROM mounted from a previous instance, Install VMware Tools does not force the unmount or eject of the existing CD-ROM.

    Workaround: Manually unmount the CD-ROM from within the guest, or use the mount and umount commands in the Solaris guest.

  • Adding Disk to Virtual Machine Running a Snapshot Causes a Non-Unique Disk Name

    Description: If a virtual machine is currently running off a snapshot and the user then adds (or hot-adds) a disk to this virtual machine, choosing to store the disk on a datastore different from the one on which the virtual machine's working directory resides, then the disk that is added will not have a unique disk name. Once in this situation, if the user attempts to delete snapshots on the virtual machine while it is powered on, the operation will fail with a FileLocked error, and the user will see multiple "Consolidate Helper" snapshots in the virtual machine's snapshot manager. Once this situation is hit, the user must power off the virtual machine and then delete the "Consolidate Helper" snapshots to return the virtual machine to its desired state.

    Workaround: Perform the following:

    • If a virtual machine is running off a snapshot, it is recommended that any disks added to the virtual machine be added to the same datastore on which the virtual machine's working directory resides.
    • If a virtual machine is running off a snapshot and the user must add a disk on a different datastore, it is recommended that the user consolidate all the snapshots of the virtual machine prior to adding the disk, if this is feasible.
    • Once the user has added a disk on a different datastore to a virtual machine running off a snapshot, it is recommended that the user power off the virtual machine before deleting its snapshots. While this approach is not guaranteed to work in every case, at least the virtual machine will not be left dangling with "Consolidate Helper" snapshots.
    • If the user does run into this situation, it is recommended that the virtual machine be powered off before attempting to delete the "Consolidate Helper" snapshots.

      Note: In some cases, the virtual machine's disk state might remain inconsistent despite deleting the helper snapshots described above, and as a result you might run into problems during subsequent operations on the virtual machine.

  • Creating a virtual machine using an existing disk from a legacy virtual machine can result in anomalies

    Description: The New Virtual Machine wizard is intended to create a modern virtual machine, and it presents choices based on that intention. But adding a legacy disk to a virtual machine "downgrades" it to a legacy virtual machine, so in this situation the actual valid ranges for settings such as memory will be different from what the UI allowed. Because upgrading the virtual disk would make it unusable by legacy virtual machines, this upgrade is not performed automatically when the disk is added to the virtual machine.

    Best practice: If you create a virtual machine with an existing legacy disk, "upgrade virtual hardware" on the new virtual machine immediately after the creation task completes. NOTE that doing this will permanently upgrade the virtual disk file, and it will no longer be usable on legacy virtual machines.

    Workaround: If you require the disk to remain usable by legacy virtual machines, you must not use any of the new features of modern virtual machines, nor exceed the memory allocation limits that apply to a legacy virtual machine. After creating a virtual machine with a legacy disk, edit the properties (settings) of the virtual machine and make sure the memory setting is still in the allowed range. Also, because ESX Server 3.0 does not support adding or removing devices on legacy virtual machines, you might need to create the virtual machine with a temporary placeholder disk (1 MB is fine; you never intend to power on with this disk), reconfigure the hardware as desired, and then remove the placeholder disk and add the real existing disk in a single reconfiguration operation.

  • Adding a legacy virtual disk to a virtual machine "downgrades" it to a legacy virtual machine, and can result in problems

    Description: Because upgrading the disk to work in a modern virtual machine would make it no longer work in legacy virtual machines, VirtualCenter does not perform the upgrade automatically. But if the virtual machine was already using features available only in modern virtual machines (for example, client-side devices, virtualization of arbitrary host SCSI devices, and higher maximum virtual machine memory limits), the result is an invalid legacy virtual machine. Also, because adding and removing virtual devices is not supported on legacy virtual machines, the virtual machine can't be "fixed" by removing the legacy disk.

    Best Practice: Immediately after the reconfiguration task that adds a legacy virtual disk to a virtual machine completes, upgrade the virtual hardware of the virtual machine. NOTE that this will change that legacy disk and it will no longer be usable by legacy virtual machines.

    Workaround: If you need to keep your legacy virtual disk usable by legacy virtual machines, you will not be able to use it in a modern virtual machine. Because virtual device addition and removal are not supported for legacy virtual machines on ESX Server 3.0, you will need to make all of your other virtual hardware changes before adding the legacy disk. Immediately after the task to add the legacy disk completes, edit the properties (settings) of the virtual machine and click every entry on the Hardware tab to verify that the virtual hardware is consistent with the "downgrade" to a legacy virtual machine. If you have already added a disk that you do not wish to upgrade to a virtual machine that cannot function as a legacy virtual machine, there is no GUI approach to fix the problem; it is probably easiest to recreate the virtual machine without the legacy disk.

  • When Attempting to Mount an ISO Image on an ESX Server 2.x Host Managed by VirtualCenter 2.0.0, the /vmimages Datastores Directory Is Not Visible

    Description: ESX Server 3.0 systems automatically create the /vmimages directory. ESX Server 2.x systems do not create a /vmimages directory.

    Even if vpxa did create a /vmimages directory on ESX Server 2.x hosts, there wouldn't be any ISOs in that directory. You have to manually copy over any ISOs that you need.

    You can, however, specify an alternate ISO directory in vpxa.cfg. If your ISO images are stored in /root/ISO, for example, you can edit the vmImages tag in vpxa.cfg, and the datastore browser reads that directory when you are looking for ISO images.

    Workaround:

    1. Manually create a vmimages directory in the / directory.
    2. Mount the ISO image or add a link to the mounted ISO images.
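
    For example, on the ESX Server 2.x service console, something like the following might be used (the ISO location /root/ISO is illustrative only):

    mkdir /vmimages                  # create the directory the datastore browser expects
    ln -s /root/ISO /vmimages/ISO    # link in the directory that actually holds your ISO images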

  • Solaris10U1 Virtual Machines Might Not Properly Discover/Configure the Virtual Floppy Device, Virtual Parallel Port, and/or Virtual Serial Port

    Description: After you add a virtual floppy device, a virtual parallel port, and/or a virtual serial device, a Solaris10U1 guest might not properly initialize the devices. For example, a physical machine running Solaris10U1 would normally auto-mount a floppy disk to /floppy when the disk is inserted into the floppy drive. In a Solaris10U1 virtual machine on ESX Server 3.0, this might not happen.

    Workaround: Try the following:

    In a terminal on the Solaris10U1 virtual machine, enter the following commands:
    # eeprom acpi-user-options=0x0
    # touch /reconfigure
    # reboot

    These commands should cause Solaris10U1 to recognize and configure your virtual devices properly. To verify, you might do the following:

    floppy:

    1. insert a floppy disk into the ESX Server host
    2. using the UI, make sure the floppy is 'connected'
    3. in the Solaris10U1 vm, issue the following command
      # volcheck -v

    This should mount /floppy (/dev/diskette) to your floppy disk

    parallel:

    1. add a virtual parallel port to your vm, and choose to output to a file on the ESX Server host
    2. using the UI, make sure the parallel port is 'connected'
    3. in the Solaris10U1 vm, issue the following commands
      # lpadmin -p tofile -v /dev/printers/0
      # lpadmin -p tofile -l

    This should show you the properties associated with the printer. Please refer to Solaris documentation for additional configuration.

    serial:

    1. add a virtual serial port to your vm, and choose to output to a file on the ESX Server host
    2. using the UI, make sure that the serial port is 'connected'
    3. in the Solaris10U1 vm, issue the following commands
      # echo "my serial port is now working" > /dev/term/a
    4. on the ESX Server host, view the serial output file to see that the message has been posted.

    Note: This method also does not work for serial connections through pipes.

  • When using serial ports that are connected to named pipes, relative paths and locations on VMFS volumes cannot be used

    Description: The serial port editor allows you to browse datastores to select a location for the named pipe, and also allows you to type a relative path in the text dialog. Typing a relative path implicitly places the named pipe on the datastore where the virtual machine resides. In either case, if the datastore is a VMFS volume, the serial port will not function properly because VMFS does not support named pipes. Named pipes continue to work on NFS datastores.

    Workaround: Use NFS datastores for your serial port named pipes, or directly modify the virtual machine's configuration file to locate the named pipe on the service console file system. To modify the configuration file directly, locate the entry serial0.fileName = "foo" and change the file name there. After doing so, unregister and reregister the virtual machine so that the changes are picked up.
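
    For example, the changed configuration file entry might look like the following (the path is only an example of a location on the service console file system):

    serial0.fileName = "/var/tmp/vmserial0.pipe"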

  • Re-editing a virtual machine before a previous edit completes might fail temporarily

    Description: If a virtual machine edit operation is started and another is started quickly before the first one completes, the second operation might fail with an error message about concurrent access to the virtual machine.

    Workaround: Retry the edit operation a few seconds later.

  • Cloning a Windows Virtual Machine Can Fail if the Virtual Machine Has a Nonstandard BootExecute Value

    Description: Every time Windows restarts, autochk.exe is called by the kernel to scan all volumes and check whether the volume dirty bit is set. This behavior is specified in the BootExecute registry value (located in the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager). The customization process takes advantage of this to run at startup and customize the virtual machine. If this value is not set to run autochk.exe, customization cannot run when the virtual machine is powered on. The result is an uncustomized Windows virtual machine.

    Workaround: Set the BootExecute registry key value to the standard Windows value, which runs autochk.exe on startup.
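
    On a standard Windows installation, the BootExecute value (of type REG_MULTI_SZ) typically contains the following line; restoring it re-enables autochk.exe at startup:

    autocheck autochk *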

  • If no DHCP Server is Available When a Windows Virtual Machine is Customized, the Virtual Machine might Not Be Able to Join a Domain

    Description: When a virtual machine is customized with a static IP address, the customization process assigns the IP address to the network card during the final boot of the virtual machine. Joining a domain happens earlier, during the second boot. If no DHCP server is available during the second boot, the network card does not get an IP address and the domain join fails. During the third and final boot, the static IP addresses are assigned, but the domain join step is not performed again. The result is a virtual machine that is not part of the domain.

    Workaround: Make a DHCP server available so that the customized virtual machine can get a temporary IP address and join the domain before a static IP is assigned.

  • On Rare Occasions in Certain Fibre Channel Environments, VMotion does not Succeed and the Virtual Machine Remains Running on the Source Machine

    Description: This might occur with QLogic cards when there is a particular ESX Server in a crashed state or there is a disk array with a bad port that is powered on but does not allow any Fibre Channel log ins.

    To verify that you are having the most common version of this problem, check the VMX log file on the VMotion destination ESX Server machine, which shows:

    vmx| MigrateStateUpdate: Transitioning from state 9 to 10.
    vmx| Unable to initialize swap file /vmfs/volumes/43e11658-
    vmx| Module Migrate power on failed.

    Explanation: When this occurs, a fabric event (host adds, switch adds, rezoning, and so on) has occurred during a VMotion. At the same time, there is another machine in the fabric that is powered on but not fully operational, or a bad port on a disk array. Due to issues with the QLogic driver on the healthy running machine, I/O to its storage is delayed during its rediscovery response to the fabric event and VMotion fails. This problem might also occur if any one machine or FC device somewhere in your FC fabric has a partially non-responsive failure mode.

    The VMotion typically fails because the destination virtual machine cannot power on. In the most common case, the swap file of the destination virtual machine cannot be opened. This results only in a VMotion failure; the virtual machine continues to execute on its source server. There is no loss of data or availability of the virtual machine.

    Workaround: None at this time.

  • For ESX Server 2.0.1 Hosts, Changing Power Options in Virtual Machine Edit Settings for a Powered-On Virtual Machine Gives an Error Message

    Description: Changing the power options when the virtual machine is powered on causes an error. The error message reads "Operation not supported." This only happens for ESX Server 2.0.1 hosts.

    Workaround: Power off the virtual machine and then change the power options setting.

  • Intermittent Virtual Machine Failures With NIS-enabled ESX Server Hosts

    Description: Intermittent NetworkCopyFault errors might appear while cloning or migrating virtual machines on ESX Server hosts configured to use local authentication through an NIS server. This is usually seen only under heavy stress when reading user data from an NIS server.

    Workaround: Temporarily disable authentication with NIS on the hosts, and try your operations again.
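
    As a rough sketch only (assuming a Red Hat-based service console where NIS is provided by the ypbind service), temporarily disabling NIS might look like this:

    service ypbind stop     # stop the NIS client while you perform the clone or migration
    # Optionally remove "nis" from the passwd, shadow, and group lines in /etc/nsswitch.conf.
    service ypbind start    # re-enable NIS afterwards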

Guest Operating System

  • Older Guest Operating System Exhibits Unusual Behavior

    Description: If you are using an old guest operating system in ESX Server 2, such as Red Hat Linux 7.2, 7.3, 8.0, or 9.0, SuSE Linux 8.2 or 9.x, or FreeBSD, then moving the virtual machine to an ESX Server 3.0 host can result in an unsupported state. Refer to the ESX Server Systems Compatibility Guide or the VMware Guest Operating Systems Guide for a list of supported operating systems.

  • 64-bit Red Hat Enterprise Linux 4 hangs during installation with a black screen

    Description: On host machines with Intel EM64T VT-capable processors, you might encounter a black screen when installing 64-bit Red Hat Enterprise Linux 4 as a guest operating system.

    Near completion of the installation, the guest might appear to hang with a black screen.

    This problem has been observed to occur after the guest operating system has successfully been installed and the X server is starting up to display a login screen.

    Workaround: Pressing Alt-F7 should reveal the login screen, and you might proceed normally. If not, reset the guest. The operating system should be completely installed and working properly.

  • Guest Operating System Detects New USB Hardware after Migration

    Description: The guest operating system can detect an unchanged USB configuration as new hardware when older virtual machines are migrated to an ESX Server version 3 host. This usually occurs following one of these activities:

    • You power on the virtual machine in Legacy mode.
    • You upgrade the virtual hardware and then power on the virtual machine.

    The new USB controller is permanently added to the virtual machine.

    Workaround: None.

  • Clock falls behind real time in 64-bit Linux guest operating systems

    Description: When running a 64-bit Linux operating system in a virtual machine, you might observe that time in the guest runs slower than real time and continually falls farther behind.

    Workaround: At this time the best available workaround is to install VMware Tools in the guest and turn on the time synchronization option in the toolbox. VMware Tools will then check the guest's clock against host time once per minute and correct it if it is behind.

    Alternatively, you can set the VMkernel configuration variable Misc.TimerMinHardPeriod to 400, using one of the following methods.

    Method one, through the VI Client

    1. In the left panel, click on the host name.
    2. In the right panel, click on the Configuration tab.
    3. Under Software, double-click Advanced Settings. A new window pops up.
    4. In the left panel of the new window, click on Misc.
    5. Scroll down to Misc.TimerMinHardPeriod.
    6. Enter "400" in the typein field.
    7. Press OK.

    Method two, through the Service Console: set Misc.TimerMinHardPeriod to 400. From a service console shell, type:
    echo 400 > /proc/vmware/config/Misc/TimerMinHardPeriod
    If you choose this workaround, you have to redo it after every reboot.
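
    As a sketch only (confirm that this fits your environment), you can avoid re-entering the command after each reboot by appending it to a service console startup script such as /etc/rc.d/rc.local:

    echo 'echo 400 > /proc/vmware/config/Misc/TimerMinHardPeriod' >> /etc/rc.d/rc.local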

  • Storage Drivers for the VMware LSILogic Storage Adapter Are Not Available in Windows XP

    Description: When you are trying to perform an unattended installation of Windows XP over the network using a PXE server, such as Altiris, the storage drivers for the VMware LSILogic storage adapter are not available, and the installation terminates with a BSOD. Drivers for this device are not available on the unattended installation image.

    Workaround: A Windows XP guest is created with LSILogic as the default SCSI controller. You must download the driver from the download center at the LSI Logic Web site. Go to www.lsilogic.com/support/ and look for the LSI20320 SCSI adapter driver for your guest operating system. For details on installing this driver, see the VMware ESX Server Administration Guide. The LSI Logic Web site also provides an Installation Guide for the LSI Logic Fusion-MPT™ Driver: SYMMPI.SYS V1.xx.xx, located (at the time of this publication) at www.lsilogic.com/files/support/ssp/fusionmpt/WinXP/symmpi_xp_12018.txt

  • Solaris warning "Stopped tracking Time of Day clock"

    Description: Solaris might report one of the following messages:

    WARNING: Time of Day clock error: reason [Stalled].
    -- Stopped tracking Time Of Day clock.
    WARNING: Time of Day clock error: reason [Jumped].
    -- Stopped tracking Time Of Day clock.

    Workaround: The warnings are harmless, and you can simply ignore them. Alternatively, you can do the following:

    1. Suppress them by adding the following line to /etc/system in your virtual machine:

      set tod_validate_enable = 0

    2. Install VMware Tools in your virtual machine and turn on time synchronization to ensure that time is accurate.

  • 64-bit Guests Might Not Run on Dell PowerEdge 6850 Hosts Running ESX Server 3.0

    Description: Dell PowerEdge 6850 Hosts Running ESX Server 3.0 Need A03 BIOS to Install 64-bit Guests

    Workaround: Dell PowerEdge 6850 hosts running ESX Server 3.0 must have BIOS A03 (dated 3/15/06) to install 64-bit guest operating systems. Intel Virtualization Technology (VT) must also be enabled in the BIOS (it is disabled by default). To update the BIOS on your Dell PowerEdge 6850:

    See the Dell Web site at http://support.dell.com/support/downloads/format.aspx?releaseid=R119962&c=us&l=en&cs=&s=gen to download the BIOS for your host, and follow Dell's instructions to upgrade your system's BIOS.

    To enable VT in the BIOS, follow the instructions in http://ftp.us.dell.com/bios/PE6850-BIOSA03.TXT.

  • SUSE SLES 8, Red Hat Enterprise Linux AS 4.0, and Red Hat Enterprise Linux AS 4.0 Update 3 (64-bit) Guests Fail to Install with a Black Screen

    Description: On host machines with Intel EM64T VT-capable processors, you might encounter a black screen when you have completed installing 64-bit Red Hat Enterprise Linux 4 as a guest operating system.

    At the last step of the installation, the guest might crash to a black screen. This problem has been observed to occur after the guest operating system has successfully been installed.

    Workaround: Pressing Alt-F7 resolves several instances of this black screen. If that does not work, reset the guest; the operating system should be installed and working properly.

  • Solaris 10 Update 1 Spew after Booting

    Description: A Solaris 10 Update 1 virtual machine produces a stream of log messages after booting up, with vmware.log entries of the form VIDE: Missize IN 0x174 (and OUT). This is a known Solaris bug (#6373475) and is known not to cause any harm. Sun's issue report is at http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6373475.

    Workaround: A patch is expected from Sun to address this issue.

  • Insufficient Memory Causes Guest Reboot, Hang, or Triple Fault Panic on Solaris 10 Update 1 Installation or Upgrade

    Description: Running a Solaris 10 Update 1 guest with less than the required minimum memory can result in a triple fault panic or cause the guest to hang or reboot, accompanied by the error message vcpu-0:ASSERT vmcore/vmm/cpu/segment.h:444 bugNr=19580.

    Workaround: There is no workaround. For successful operation, Solaris 10 virtual machines must meet the following Solaris 10 requirements. For x86-based systems:

    • Starting with the Solaris 10 1/06 release, Sun recommends 512 MB of memory. 256 MB is the minimum requirement.
    • For the Solaris 10 3/05 release, Sun recommends 256 MB of memory. 128 MB is the minimum requirement.

    Before upgrading a virtual machine’s guest operating system to the Solaris 10 1/06 release or later, increase the virtual machine’s RAM to at least 256 MB. See your VMware product documentation for instructions. For more information see the System Requirements and Recommendations for Solaris 10 Installation, on the Sun Web site at: http://docs.sun.com/app/docs/doc/817-0544/6mgbagb0v?a=view.

  • Display and Mouse Problems in Solaris 10 Virtual Machines

    Description: Virtual machines running Solaris 10 Update 1 might display incorrectly, and the mouse might not track properly. These problems occur because Solaris 10 Update 1 includes the Xorg Xserver release 6.8, which does not contain updated VMware mouse and vga drivers.

    Workaround: Install VMware Tools or Xorg 6.9. This will update the VMware mouse and vga drivers.

  • Excessive Network Error Messages Detected in Solaris 10 Virtual Machine

    Description: In 32-bit Solaris guests, under a heavy network load, the Solaris kernel might periodically write the following message from the pcn network adapter driver to the system log (/var/adm/messages):
       NOTICE: pcn: possible RX frame corruption
    This message is not relevant to a virtual machine environment and can be safely ignored.

    Workaround: Install VMware Tools. This changes the guest from the pcn network adapter driver to the vmxnet driver, which does not have this issue.

  • Installation of 64-bit Red Hat Enterprise Linux 3,6 and 8 SMP Kernel Occasionally Fails with an Anaconda ImportError Message

    Description: This failure might also be accompanied by any of a number of error messages indicating that a library is missing. The following error messages have been observed:

    • ImportError: libgssapi_krb5.so.2: cannot open shared object file: no such file or directory
    • /usr/bin/python2.2: error while loading shared libraries: libdl.so.2: cannot open shared object file: Error 6; install exited abnormally
    • exec of anaconda failed: Permission denied; install exited abnormally
    • /usr/bin/python2.2: error while loading shared libraries: /lib64/libc.so.6: cannot read file data: Invalid argument

    Workaround: Retry the installation.

  • Red Hat Enterprise Linux Advanced Server 3.0 64-bit Guest Produces Error on Intel VT Host

    Description: When you install Red Hat Enterprise Linux Advanced Server 3.0 x86-64 or Red Hat Enterprise Linux Advanced Server 3.0 x86-64 Update 1 on a 64-bit capable (VT) Intel host, the installer cannot complete, and returns the error: Your CPU does not support longmode.

    Red Hat Enterprise Linux Advanced Server 3.0 x86-64 and Red Hat Enterprise Linux Advanced Server 3.0 x86-64 Update 1 do not support Intel 64-bit CPUs. See the release notes for Update 2 on the Red Hat Web site at http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/release-notes/as-amd64/RELEASE-NOTES-U2-x86_64-en.html

    Support for Intel EM64T has been added to the Red Hat Enterprise Linux 3 Update 2 x86-64 distribution. This means that Intel processors with this technology are now supported, in addition to the previously supported AMD64 processors.

    Workaround: Install Red Hat Enterprise Linux 3 Update 2 or later.

  • Blue Screen On Windows 2003 SP1 When Mounting Volumes

    Description: When running Windows 2003 SP1 as a guest operating system, occasional blue screen failures have been observed during disk or volume mounting operations. This failure is rare in normal operation and is characterized by the following error code:

    STOP: 0x00000024

    This problem does not occur in other versions of Windows.

    Workaround: Apply the hotfix from Microsoft: http://support.microsoft.com/default.aspx/kb/910048. The listed file versions (from 01-Nov-2005, file version 5.2.3790.2560) do not actually fix the problem, so you need to ask Microsoft Support for the updated hotfix.

  • NEW 5/23/06 64-bit Linux Guests Using More Than 4 GB of Memory Might Experience a Guest or vmware-vmx Panic

    Description: 64-bit Linux guests essentially require the computer to have an IOMMU when more than 4 GB of memory is present. Unfortunately, our virtual machines do not provide an IOMMU and thus exhibit problems such as random guest or vmware-vmx panics.

    Workaround: Generally:

    • For RHEL3, update to the latest RHEL3 update available.
    • For SLES8/SuSE9.0, upgrade to SLES9/SuSE9.1 or newer.

    Specifically, there are three workarounds depending upon the Linux version used.
    Category 1: For kernels 2.4.25 or older, and 2.6.0 - 2.6.3 (unless they are explicitly mentioned below (RHEL3)):

    Provide the kernel option iommu=off. This option forces the guest not to use the (nonexistent) IOMMU. You might see a guest panic: Kernel panic: pci_map_single: high address but no IOMMU. This means that some device drivers in the guest cannot cope with more than 4 GB without an IOMMU. Upgrade to one of the kernels mentioned below, or limit your virtual machine to less than 4 GB of memory.
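
    As an illustration only, for a guest that boots with GRUB the option can be appended to the kernel line in the boot loader configuration (the kernel version, root device, and paths below are placeholders):

    # excerpt from /boot/grub/grub.conf (or menu.lst, depending on the distribution)
    title Linux
        root (hd0,0)
        kernel /vmlinuz-<version> ro root=/dev/sda2 iommu=off
        initrd /initrd-<version>.img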

    Category 2: For kernels 2.4.26 and newer, and for 2.6.4 kernel (unless they are explicitly mentioned below (SUSE9.1 without updates)):

    Provide the kernel option iommu=off and make sure that your kernel was built with SWIOTLB support. Once the system boots, you can verify that SWIOTLB is used by executing the following command:
    dmesg | grep SWIOTLB
    PCI-DMA: Using software bounce buffering for IO (SWIOTLB)

    The line PCI-DMA: Using software bounce buffering for IO (SWIOTLB), or a similar message, should appear in the kernel log output of this command. Using the iommu=off option is optional for a kernel with SWIOTLB support, but if your kernel was built without SWIOTLB support and you did not use the iommu=off option, silent memory corruption might occur instead of the panic (pci_map_single: ...) shown above.

    Category 3: For kernels 2.6.5 and newer:

    Provide the kernel option iommu=soft and make sure that your kernel was built with SWIOTLB support. Once the system boots, you can verify this by executing the command:
    dmesg | grep SWIOTLB

    The line PCI-DMA: Using SWIOTLB, or a similar message, should appear in the kernel log output of this command. Using the iommu option is optional for a kernel with SWIOTLB support, but if your kernel was built without SWIOTLB support and you did not use the option, silent memory corruption might occur instead of the panic (pci_map_single: ...) shown above.

    Category 4: Exceptions and Known Guests:

    • SLES8, SuSE9.0 — Category 1, cannot function with > 4 GB guests.
    • RHEL3, RHEL3U1 (based on kernels 2.4.21 before introducing EM64T support): — Category 1, cannot function with > 4 GB guests.
    • RHEL3U2..RHEL3U6, ia32e flavor — Category 2, does function with > 4 GB guests.
      Two other flavors, smp and up are Category 1 and do not function with > 4 GB guests because they are built without SWIOTLB support.
      It works on EM64T hosts out of the box. Although installation on an AMD box succeeds, after the first reboot the guest crashes because the installer installs the -up or -smp kernel instead of the -ia32e kernel that is used on the installation CD.
    • Ubuntu 5.04, 5.10 — Category 3, OK.
    • SLES9, SUSE9.1, SUSE9.2, SUSE9.3, SUSE10 (+SP/updates) — Category 3, OK.
    • RHEL4 — Category 3, OK.

VMware Tools

    There are no VMware Tools related issues at this time.

VirtualCenter Services

General Resource Management

  • VMotion Fails to Migrate a Virtual Machine When the Target Host Is Running ESX Server 2.5.3 or Earlier

    Description: VMotion of a virtual machine might fail when the target ESX Server host is running version ESX Server 2.5.3 or earlier. The log file for the virtual machine shows an error message such as VMware ESX Server unrecoverable error: Action added when prohibited.

    Workaround: Use VMXnet, or keep the virtual machine's NIC connected and ensure there is network activity in the guest OS.

  • Some Virtual Machines Might Not Power on Even if Their Reservation Is Less Than the Unreserved Capacity of Their Parent Resource Pool

    Description: A virtual machine will only be allowed to power on if its reservation is less than or equal to its number of virtual cpus multiplied by the MHz rating for a single processor core on a host. This is to guarantee that the virtual machine receives its full specified reservation. A host must have sufficient aggregate unreserved capacity to satisfy the reservation, as well as sufficient per-core capacity to satisfy the reservation given the virtual machine's number of virtual cpus.

    For example, consider an SMP virtual machine with 2 virtual cpus and a reservation of 5 GHz, and a host with 4 physical cores, each rated at 2 GHz. Although the host might have sufficient aggregate capacity to supply 5 GHz to the virtual machine, each physical core can supply only 2 GHz, so a virtual machine with 2 virtual cpus cannot receive more than 2 x 2 GHz = 4 GHz, and the virtual machine is not allowed to power on. In order to power on, the host must have cores rated at 2.5 GHz or faster, so that the virtual machine can receive 5 GHz across its two virtual cpus.

    Workaround: This check (that the virtual machine reservation must be less than or equal to the number of virtual cpus multiplied by the single-core MHz rating) can be disabled by setting the per-host configuration option Cpu/VMAdmitCheckPerVcpuMin to 0, as sketched after the note below.

    Note: this is not recommended as it could lead to situations where virtual machines do not receive their full specified reservations.
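
    Following the pattern used for other VMkernel options in these notes, a sketch of disabling the check from a service console shell (assuming the option is exposed under /proc/vmware/config) is:

    echo 0 > /proc/vmware/config/Cpu/VMAdmitCheckPerVcpuMin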

VMware DAS and VMware HA

  • After moving a host out of a cluster, the inventory view might not show some of that host's virtual machines

    Description: When a host is moved out of a cluster, it might appear to "lose" some or all of its virtual machines in the left-hand inventory tree display. If this happens, the host and the virtual machines have not actually been affected, but the display is inaccurate.

    Workaround: Select any object in the left-hand inventory tree display, then press F5 to refresh the display.

Miscellaneous

  • Hewlett Packard DL760 G2 Servers Issue Warnings Whenever You Load the VMkernel

    Description: When you load the VMkernel on a DL760 G2 Server, the server issues the following warning messages:

    WARNING: ACPI: 788:bspNodeID not found
    WARNING: MPS: 262: No IOAPIC ID 9 for int entry
    WARNING: Chipset: 737: bus 1 isn't present

    These warning messages are benign and you can ignore them.

    Workaround: None.

  • Auto-start is not run when manually restarting a host in maintenance mode

    Description: To shut down or restart an ESX Server host, the host should be placed into maintenance mode to prevent data loss in virtual machines. Upon restart, however, the ESX Server host remains in maintenance mode and the auto-start manager is not invoked. If an ESX Server host unexpectedly reboots or is power-cycled without being in maintenance mode, the auto-start manager is still used to restart the virtual machines.

    Workaround: Manually restart the virtual machines.
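
    You can power the virtual machines on from the VI Client. As a hedged sketch (the configuration file path below is only an example), you can also do this from the service console with the vmware-cmd utility:

    vmware-cmd -l                                              # list the configuration files of registered virtual machines
    vmware-cmd /vmfs/volumes/datastore1/myvm/myvm.vmx start    # power on one of the listed virtual machines (example path)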

  • MSCS: Upgrades Not Supported for RC

    Description: Upgrades of Microsoft Cluster Service from ESX Server 2 to ESX Server 3.0 are not currently supported. RC users must perform a fresh install, as discussed in the document Setup for Microsoft Cluster Service.

    Workaround: None.