VMware

Details of What's New and Improved in VMware Infrastructure 3 version 3.5

The Details of What's New and Improved in VMware Infrastructure 3 cover the following topics:

Feature Details

Introducing ESX Server 3i Embedded


ESX Server 3i Embedded removes a major source of size and complexity from the ESX Server 3 product: the Linux-based service console. This enables VMware to deliver a core hypervisor with an on-disk footprint of just 32MB while preserving all of the functionality of today's ESX Server 3 product, including VMware VMotion, VMware Distributed Resource Scheduler (DRS), VMware High Availability (VMware HA), VMware Consolidated Backup (VCB), VirtualCenter 2 manageability, and so on.

Similarities:

  • Management interfaces—Just as ESX Server 3 can be managed by graphical and programmatic interfaces such as the VMware Infrastructure Client (connecting either directly to ESX Server or to VirtualCenter), the VI SDK, and the VI Perl Toolkit, you can use these same interfaces to manage ESX Server 3i. With ESX Server 3i, the service console command-line interface is not available; it is replaced by the Remote Command-Line Interfaces (Remote CLIs).
  • Storage—ESX Server 3i supports all the same forms of shared storage that ESX Server 3 supports.
  • Performance—ESX Server 3i has essentially the same performance characteristics as ESX Server 3.
  • Virtualization compatibility—ESX Server 3i is compatible with its concurrent ESX Server 3 release. It can run the same virtual machines, share the same datastores, and have VMotion and other features operate correctly between ESX Server 3i and ESX Server 3 hosts. In short, it can fully coexist in the same cluster and interoperate with ESX Server 3.

Differences:

  • Servers supported—ESX Server 3i is supported on a subset of servers, relative to ESX Server 3.
  • Direct Console User Interface (DCUI)—ESX Server 3i includes a new BIOS-menu-like local user interface for basic configuration and troubleshooting (for example, network and security configuration and log file viewing).
  • Auto-configuring—By default, ESX Server 3i auto-configures on first boot with DHCP networking, a default username and password, and auto-partitioning and formatting of any blank, local disk.
  • Updating—ESX Server 3i is small enough that it is updated and maintained at a full-image level.
  • Server hardware health monitoring—ESX Server 3i supports new, agent-less interfaces for standardized hardware health monitoring. The interfaces are based on the Common Information Model Systems Management Architecture for Server Hardware (CIM SMASH) standard and are accessible to third parties. The VI Client and VirtualCenter Server now capture and display this same hardware health information.
  • Scripting interfaces and scripted installation—Whereas users of ESX Server 3 often wrote scripts in the Linux-based service console (either for initial configuration or for later configuration change), users of ESX Server 3i can now use a new Remote CLI set of tools to run those same scripts. The scripts can be run from a remote environment (for example, from a user's Windows or Linux laptop, desktop, or virtual machine).

Effective Datacenter Management

  • Guided Consolidation—Guided Consolidation is an enhancement to VirtualCenter that guides new virtualization users through the consolidation process in a wizard-based, tutorial-like fashion. Guided Consolidation leverages capacity planning capabilities to discover physical systems and analyze them. Integrated conversion functionality transforms these physical systems into virtual machines and intelligently places them on the most appropriate VMware ESX Server hosts. Guided Consolidation makes it easy for first-time virtualization users to get started and achieve the benefits of server consolidation faster than ever before. Guided Consolidation is intended for smaller server environments.

    The guided server consolidation interface also includes these features:

    • An easy installation process for VirtualCenter 2.5.
    • An unlicensed evaluation period during which users have access to all VMware Infrastructure 3 features.
    • Context-sensitive instructions, tutorials, and active links that guide users from initial system setup to running virtual machines on VMware Infrastructure.
  • VMware Converter Enterprise Integration—VirtualCenter 2.5 provides support for integrated Physical-to-Virtual (P2V) and Virtual-to-Virtual (V2V) conversion functionality through an add-on component called VMware Converter Enterprise for VirtualCenter 2.5. This integration enables seamless migration of various types of physical and virtual machines to VMware Infrastructure 3. It also includes support for scheduled and scripted conversions, support for Microsoft Windows Vista conversions, and restoration of virtual disk images that are backed up using VCB, all within the VI Client. For additional details on VMware Converter Enterprise for VirtualCenter 2.5, refer to the VMware Converter Enterprise Release Notes.
  • Distributed Power Management (experimental)—VMware Distributed Power Management reduces power consumption by intelligently balancing a datacenter's workload. Distributed Power Management, which is part of DRS, automatically powers off servers whose resources are not immediately required and returns power to these servers when the demand for compute resources increases again.

    Distributed Power Management (DPM) is an optional enhancement to DRS that is experimentally supported in this release. When cluster utilization is low, DPM-enabled DRS consolidates workloads within the cluster onto fewer ESX Server hosts and recommends that certain ESX Server hosts be powered off. When additional capacity for running workloads within the cluster is needed, DPM-enabled DRS recommends powering on ESX Server hosts and rebalances workloads across the powered-on hosts in the cluster. DPM-enabled DRS also ensures that enough powered-on capacity remains available to satisfy any VMware HA settings.

    DPM can operate in automatic or manual mode with respect to the DRS cluster, and it can be disabled or its mode overridden on a per-host basis. DPM can be enabled in a DRS cluster on any ESX Server host that has the appropriate hardware support and configuration. The NICs used by the VMkernel network must have Wake-on-LAN functionality, which is used to bring an ESX Server host up from a powered-off state. Best practice is to test the wake capability on each ESX Server host on which DPM is to be enabled.

  • Remote Command-Line Interface—Remote Command-Line Interfaces (Remote CLIs) are available as part of the Remote CLI appliance or as an installable package for Linux or Microsoft Windows. Remote CLIs allow you to use scripts to automate setup and configuration of ESX Server 3i hosts.
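
    For example, a hypothetical invocation from a Windows or Linux workstation with the Remote CLI package installed might look like the following sketch (the host name and credentials are placeholders, and the vicfg-nics command with its --server, --username, and --password connection options is assumed to be present in your Remote CLI installation):

      # List the physical NICs of a remote ESX Server 3i host (illustrative placeholders)
      vicfg-nics --server esx3i-host.example.com --username root --password 'secret' -l
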
  • Firewall Configuration on ESX Server Hosts through VI Client—Administrators can now configure the firewall on ESX Server hosts through the VI Client.
  • Image Customization for 64-bit Guest Operating Systems—Image customization allows administrators to customize the identity and network settings of a virtual machine's guest operating system during virtual machine deployment from templates. VirtualCenter 2.5 provides support for image customization of the following 64-bit guest operating systems:

    • Windows Vista
    • Windows XP
    • Windows Server 2003 Enterprise SP1
    • Windows Server 2003 Enterprise R2
    • Red Hat Enterprise Linux 4.5
    • Red Hat Enterprise Linux 5.0
    • SUSE Linux Enterprise Server 10 SP1/SP2
  • Provisioning Across Datacenters—VirtualCenter 2.5 allows you to provision virtual machines across datacenters. As a result, VMware Infrastructure administrators can now clone a virtual machine in one datacenter to another datacenter, clone a virtual machine in one datacenter to a template in another datacenter, and clone templates between datacenters. You can also perform a cold migration of a virtual machine across datacenters.
  • Batch Installation of VMware Tools—VirtualCenter 2.5 provides support for batch installation of VMware Tools, so VMware Tools can be updated for selected groups of virtual machines. VMware Tools upgrades can also be scheduled for the next boot cycle. If a VMware Tools update requires a virtual machine reboot, the VMware Infrastructure administrator is notified within the VI Client.
  • Datastore Browser—This release of VMware Infrastructure 3 supports file sharing across hosts (ESX Server 3.5 or ESX Server 3i) managed by the same VirtualCenter Server. Virtual machines can be cut and pasted from a datastore attached to an ESX Server 3 host to a datastore attached to an ESX Server 3i host, if both hosts are managed by the same VirtualCenter Server. Virtual machines can also be moved between datastores of hosts of the same type, if both hosts are managed by the same VirtualCenter Server. Alternatively, ESX Server 3.5 virtual machines exported from a datastore using the Download from Datastore Option can now be uploaded to datastores on ESX Server 3i hosts.

    Organization and registration of virtual machines should be managed at the root level of the datastore. Refer to the VMware Infrastructure Client online Help for more information on working with files on datastores and registering virtual machines.

  • Open Virtual Machine Format (OVF)—The Open Virtual Machine Format (OVF) is a virtual machine distribution format that supports sharing of virtual machines between products and organizations. VI Client version 2.5 allows you to import and export virtual machines in OVF format through the File > Virtual Appliance > Import and File > Virtual Appliance > Export menu items. For VMware Workstation, VMware Player, and VMware Fusion, you can use the OVF Tool to convert OVF packages to VMware format and from VMware format to OVF. For information about the experimental OVF Tool, see the VMware OVF Tool technical note.
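
    As an illustration only, a conversion with the experimental OVF Tool might look like the following sketch (the file names are placeholders and the exact command syntax may differ; consult the VMware OVF Tool technical note for the supported usage):

      # Convert an OVF package to a VMware (.vmx) virtual machine (hypothetical file names)
      ovftool myAppliance.ovf myAppliance.vmx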

Mainframe-class Reliability and Availability

  • Storage VMotion—Storage VMotion allows IT administrators to minimize service disruption due to planned storage downtime previously incurred for rebalancing or retiring storage arrays. Storage VMotion simplifies array migration and upgrade tasks and reduces I/O bottlenecks by moving virtual machines to the best available storage resource in your environment.
  • Update Manager—VMware Update Manager automates patch and update management for ESX Server hosts and select Microsoft and Linux virtual machines. Update Manager addresses one of the most significant pain points for every IT department: tracking patch levels and manually applying the latest security and bug fixes. Patching offline virtual machines is unique to virtual environments and enforces higher levels of compliance with patch standards than is possible in physical environments. Integration with DRS enables zero-downtime ESX Server host patching.

    Update Manager allows users to define baseline patch and update levels for compliance checking and to scan entire datacenters or individual ESX Server hosts and virtual machines against the baseline. Update Manager also reports on compliance status and provides remediation through scheduled tasks or manual intervention. It automates virtual machine snapshots prior to patching and provides rollback if the patching fails.

    For additional details on VMware Update Manager, refer to the Update Manager Release Notes.

  • VMware High Availability (VMware HA) Enhancements (experimental)—Enhanced VMware HA provides support for monitoring individual virtual machine failures. VMware HA can now be set up to either restart the failed virtual machine or send a notification to the administrator. For more information on this enhancement, see Virtual Machine Failure Monitoring.

    Support for VMware HA is experimental for ESX Server 3i. For more information on enabling and testing the VMware HA feature with ESX Server 3i, see VMware HA on ESX Server 3i.

  • VMotion with Local Swap Files—Previously, swap files for virtual machines had to be stored on shared storage for VMotion. VMware Infrastructure 3 now allows swap files to be stored on local storage while still supporting VMotion migrations for these virtual machines. Users can configure a swap datastore policy at the host or cluster level, and the policy can be overridden in the virtual machine configuration.

    During a VMotion migration or a failover of a virtual machine whose swap file is on local storage, if local storage on the destination is selected, the virtual machine swap file is re-created. The time needed to re-create the swap file depends on local disk I/O and increases if many virtual machines start concurrently, as can happen during an ESX Server host failover with VMware HA.

    VMotion migration of virtual machines with local swap files is supported only across ESX Server 3.5 hosts and later with VirtualCenter 2.5 and later.

Platform for any Operating System, Application, or Hardware

  • Management of up to 200 hosts and 2000 virtual machines—VirtualCenter 2.5 can manage many more hosts and virtual machines than previous releases, scaling the manageability of the virtual datacenter to up to 200 hosts and 2000 virtual machines.
  • Large memory support for both ESX hosts and virtual machines—ESX Server 3.5 supports 256GB of physical memory and virtual machines with 64GB of RAM. Upon booting, ESX Server 3.5 uses all memory available in the physical server. If your physical server has more than 256GB of memory and you want support from VMware, physically remove RAM so that no more than 256GB is installed.
  • ESX Server 3i host support for up to 32 logical processors—ESX Server 3.5 fully supports systems with up to 32 logical processors. Systems with up to 64 logical processors are supported experimentally.

    To enable experimental support for systems with up to 64 logical processors in ESX Server 3i:

    1. From your Remote CLI console, run vicfg-advcfg -k 64 maxPCPUS. You must also supply the connection parameters for the target host.
    2. Reboot the system.
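
    For example, the invocation might look like the following sketch (the host name and credentials are placeholders, and the --server, --username, and --password connection parameters are assumed to match your Remote CLI installation):

      # Set the maxPCPUS advanced configuration option to 64 on the target ESX Server 3i host
      vicfg-advcfg --server esx3i-host.example.com --username root --password 'secret' -k 64 maxPCPUS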

    You can also enable experimental support using the VI Client.

    1. In the VI Client, select the ESX Server host and choose Configuration > Advanced Settings > VMkernel > Boot.
    2. Set the VMkernel.Boot.maxPCPUS field to 64 and click OK.
    3. Reboot the system.
  • SATA support—ESX Server 3.5 supports selected SATA devices connected to dual SAS/SATA controllers. For a list of supported dual SAS/SATA controllers, see the ESX Server 3.x I/O Compatibility Guide.

  • 10 Gigabit Ethernet support—Support for 10GigE cards is now available with ESX Server 3.5. You can test using any Neterion or NetXen 10GigE NIC. For NetXen NICs, the firmware must be version 3.4.115 or newer and the hardware must be Rev 25 or higher. For Neterion NICs, there is no firmware version requirement. Software-initiated iSCSI and NFS running over 10GigE cards are not supported in ESX Server 3.5.
  • N-Port ID Virtualization (NPIV) support—ESX Server 3.5 introduces support for N-Port ID Virtualization (NPIV) for Fibre Channel SANs. Each virtual machine can now have its own World Wide Port Name (WWPN). You can use this feature to enable per-virtual-machine traffic monitoring and chargeback using third-party tools. In a future release, NPIV may also enable per-virtual-machine LUN masking capabilities. Both Emulex and QLogic NPIV HBAs are supported by the default drivers shipped with ESX Server 3.5.
  • Cisco Discovery Protocol (CDP) support—This release of VMware Infrastructure 3 incorporates support for Cisco Discovery Protocol (CDP) to help IT administrators better troubleshoot and monitor Cisco-based environments from within VirtualCenter 2.5 and the VI Client. CDP allows VI administrators to know which Cisco switch port is connected to each virtual switch uplink (each physical NIC). CDP enables VirtualCenter and the VI Client GUI to display the physical switch's properties, such as switch name, port number, port speed and duplex settings, and so on. CDP support is enabled by default.

  • Paravirtualized guest operating system support with VMI 3.0—ESX Server 3.5 supports paravirtualized guest operating systems which conform to the VMware Virtual Machine Interface (VMI) 3.0. VMI is an open paravirtualization interface developed by VMware in collaboration with the Linux community (VMI was integrated into the mainline Linux kernel in version 2.6.22). VMware is also working with commercial Linux vendors to provide support for VMI in their existing distributions.

    ESX Server 3.5 provides full support for Ubuntu Linux 7.04 server and desktop versions, both of which support VMI. VMI is not, however, limited to Linux. Any operating system vendor is free to modify its operating system to use the VMI interface, which delivers improved guest operating system performance and timekeeping when running in a virtual machine. Applications running on a paravirtualized operating system require no modification, yet they gain the performance benefits delivered through VMI.

    To enable paravirtualization support for a virtual machine on VMware Infrastructure 3, you must edit the virtual machine hardware settings and select the check box related to paravirtualization support under Options > Advanced.
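
    Behind that check box, the setting is recorded in the virtual machine's configuration (.vmx) file. The entry below is a minimal sketch only; the parameter name is an assumption, and the supported way to enable the feature is through the VI Client as described above:

      # Hypothetical .vmx entry enabling VMI paravirtualization for the virtual machine
      vmi.present = "TRUE"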

  • Large page size—The VMkernel can now allocate 2MB machine pages to back guest operating system memory when the guest requests 2MB pages. This reduces memory management overhead and increases hypervisor performance. Guest operating systems that support 2MB pages achieve higher performance when backed with 2MB machine memory pages. This feature is turned on by default.
  • Enhanced VMXNET—Enhanced VMXNET is the next version of VMware's paravirtualized virtual networking device for guest operating systems. Enhanced VMXNET includes several networking I/O performance improvements, including support for TCP Segmentation Offload (TSO) and jumbo frames. Additionally, Enhanced VMXNET supports both 32-bit and 64-bit guests.

    Enhanced VMXNET is supported only for a limited set of guest operating systems:

    • 32-bit and 64-bit versions of Microsoft Windows Server 2003 (Enterprise and Datacenter Editions)
    • 32-bit and 64-bit versions of Red Hat Enterprise Linux 5.0
    • 32-bit and 64-bit versions of SUSE Linux Enterprise Server 10
    • 64-bit versions of Red Hat Enterprise Linux 4.0
  • TCP Segmentation Offload (TSO)—TCP Segmentation Offload (TSO) improves networking I/O performance by reducing the CPU overhead involved with sending large amounts of TCP traffic. TSO improves performance for TCP data coming from a virtual machine and for traffic, such as VMotion, that is sent out of the server. It is supported in both the guest operating system and in the ESX Server kernel TCP/IP stack.

    TSO is enabled by default in the VMkernel. To take advantage of TSO you must select Enhanced VMXNET or e1000 as the virtual networking device for the guest.

    Where available, TSO support in the NIC hardware is used; however, the performance improvements from TSO do not depend on NIC hardware support.

  • Jumbo frames—Jumbo frames allow ESX Server 3i to send larger frames out onto the physical network. The network must support jumbo frames (end-to-end) for jumbo frames to be effective. Jumbo frames up to 9KB (9000 bytes) are supported. For ESX Server 3i, jumbo frames are supported in the guest operating system but not in the ESX Server kernel TCP/IP stack.

    Before enabling jumbo frames, ensure that the NIC or LOM supports jumbo frames, and check with your hardware vendor before enabling jumbo frames on your platform. VMware supports jumbo frames with NICs from the following vendors: Intel (82546, 82571), Broadcom (5708, 5706, 5709), NetXen (NXB-10GXxR, NXB-10GCX4), and Neterion (Xframe, Xframe II, Xframe E).

    To enable jumbo frames in a virtual machine, configure Enhanced VMXNET (supported on a limited number of guests) for the guest.
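
    On the virtual switch side, the MTU is raised from the command line. The following is a minimal sketch, assuming an ESX Server 3.5 host with a service console and a virtual switch named vSwitch1 (ESX Server 3i hosts would use the equivalent Remote CLI command); the guest operating system must also be configured with a matching MTU on its Enhanced VMXNET interface:

      # Set the MTU of vSwitch1 to 9000 bytes so it can carry jumbo frames
      esxcfg-vswitch -m 9000 vSwitch1
      # List virtual switches to verify the new MTU value
      esxcfg-vswitch -l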

  • Round-robin load balancing (experimental)—ESX Server 3.5 enhances native load balancing by providing experimental support for round-robin load balancing of HBAs. Round-robin load balancing can be configured to switch paths to the SAN based on the number of I/Os or the number of megabytes sent down a given path between path switches. The feature can be enabled with default settings through the VI Client; more advanced configuration options are available only through the CLI. For more information on using this feature, see the Round-Robin Load Balancing document.

Migrating to This Release

In addition to the new features highlighted above, other changes are implemented in this release of VMware Infrastructure 3 as compared to the previous release. These changes are listed below and are also documented in the appropriate guides. The information below assists in planning your VMware Infrastructure 3 deployment.

ESX Server 3.5 Changes

  • setinfo messages are limited to 1MB by default—Guest operating system processes can send informational messages, known as setinfo messages, to the ESX Server host through VMware Tools. Previously, these messages could be of any length; for security reasons, their size is now limited to 1MB by default. 1MB is sufficient for most cases, but you may change this value. Refer to the Security Deployments and Recommendations section in the ESX Server 3i Configuration Guide for more information.
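
    The limit can be raised or lowered per virtual machine in its configuration (.vmx) file. The entry below is a minimal sketch only; the tools.setInfo.sizeLimit parameter name and its byte-valued format are assumptions, so verify them in the ESX Server 3i Configuration Guide before use:

      # Hypothetical .vmx entry raising the setinfo message limit to 2MB (value in bytes)
      tools.setInfo.sizeLimit = "2097152"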

  • Virtual Machine File System (VMFS) starting size must be 1200MB or larger—For ESX Server 3i Embedded hosts, VMFS configuration is automatic when you first power on the host. When you create a new VMFS-3 volume, it must be 1200MB or larger. In ESX Server 3.0.1, this requirement was 600MB. See the ESX Server 3i Embedded Setup Guide for more details. You can then customize VMFS as discussed in the ESX Server 3i Configuration Guide.

  • VCB port change—When you configure VMware Consolidated Backup (VCB), the port number to connect to VirtualCenter Server or ESX Server is now 443.

  • Multimedia application support for Virtual Desktop Infrastructure (VDI)—As an optional part of VMware Tools, VMware Infrastructure 3 includes the Wyse multimedia redirection DLL to provide support for multimedia applications in a VMware Virtual Desktop Infrastructure deployment. Multimedia applications (using MPEG1, MPEG2, MPEG4, and WMV content) can now deliver enhanced remote end-user experience and synchronized audio-video to Wyse V10L and V90L thin clients using RDP from a Windows XP Pro SP2-based virtual desktop hosted on VMware Infrastructure 3.

VirtualCenter 2.5 Changes

  • Manage active sessions—VirtualCenter 2.5 allows administrators to view all users logged in to a VirtualCenter Server, terminate any active sessions when necessary, and send a message to those logged in.

  • Manage remote console connections—You can now configure VirtualCenter 2.5 to set the maximum number of allowed console connections (0 to 100) to all virtual machines.

  • Manage database connections—You can control the maximum number of database connections to be created between VirtualCenter 2.5 and the database server in use.

  • Lockdown mode—VirtualCenter 2.5 provides administrators with the option to disable direct remote access to ESX Server 3 hosts as a root user after VirtualCenter 2.5 has taken control of a given host. This is called "lockdown mode." Enabling this mode ensures that the host is managed only through VirtualCenter 2.5. Certain limited management tasks can still be performed while in lockdown mode by logging in to the local console on the host as a non-root user.

  • Virtual machine swap file location—VirtualCenter 2.5 provides support for configuring a default location for virtual machine swap files for all virtual machines on a given host or cluster. The administrator can choose to store the swap file in the same directory as the virtual machine configuration file or select a datastore specified by the host for storing the swap file. If it is not possible to store the swap file in the datastore specified by the host, the swap file is stored in the same folder as the virtual machine.

  • Licensing—VirtualCenter 2.5 provides an unlicensed evaluation mode that does not require you to install and configure a license server while installing VirtualCenter 2.5 and ESX Server 3. Also, the license server installed with VirtualCenter 2.5 supports multiple license files in a directory. See the ESX Server 3i Embedded Setup Guide for more details.
  • User account password expiration—VirtualCenter Server creates a user account (the VIM account) on each ESX Server host under its management. VirtualCenter Server assigns a random password to this account and stores the password in its database. In VirtualCenter 2.5, the password automatically changes 30 days after it was last changed. The default expiration age can be changed in the option manager by assigning a different value to the key VirtualCenter.VimPasswordExpirationInDays.

  • VirtualCenter 2.5 plug-ins—VMware independently releases optional applications that you can install on top of VirtualCenter 2.5 to provide additional capabilities. For example, the VMware Update Manager plug-in provides security monitoring and patching support for ESX Server hosts and virtual machines. In addition, you can use the VMware Converter plug-in to convert physical machines into ESX Server 3 virtual machines. Refer to the Quick Start Guide.

  • Automating service behavior based on firewall settings—VirtualCenter 2.5 can automate service startup based on the status of firewall ports. Such automation helps ensure that services start if the environment is configured in a way that enables their function. See the ESX Server 3i Configuration Guide.

Installation, Upgrade, and Migration Considerations

VirtualCenter Installation

If you select a VirtualCenter Server edition that does not correspond to your license type during installation, VirtualCenter Server fails to start. Make sure you select the appropriate edition of VirtualCenter Server during installation.

The VI Client installer installs Microsoft .NET Framework 2.0 on your machine. If an older version is present, the installer upgrades it to version 2.0.

VirtualCenter Upgrade

Before upgrading VirtualCenter Server from version 1.2 to version 2.5, you must first upgrade your VirtualCenter Server to version 1.4.1.

If you are upgrading from VirtualCenter Server 1.1 or earlier, first upgrade to version 1.2, and then to version 1.4.1. The VirtualCenter database is not preserved unless you first upgrade to at least VirtualCenter Server 1.2.

Licensing

In centralized license server mode, license files are located at the following default location on the machine running the VMware license server: C:\Program Files\VMware\VMware License Server\Licenses. This is different from VirtualCenter 2.0, where the default location of the license file was C:\Documents and Settings\All Users\Application Data\VMware\VMware License Server\vmware.lic. The older location is no longer used.

The license server does not support license files on a network share. Make sure to place your license files in a directory on the system where your license server is installed.

Single-host license files are placed at the following default location on the machine running ESX Server 3: /etc/vmware/vmware.lic. In centralized license server mode, this file exists in the same location but contains no license keys.

Manageability Considerations

VMware does not support management of one ESX Server 3.5 host by multiple VirtualCenter Server machines. While safeguards exist, you might inadvertently find that a host is managed by both a VirtualCenter version 1 server and a version 2 server at the same time. If so, shut down the version 1 server immediately or remove the host from the version 1 server to prevent corruption of virtual machines or the VirtualCenter database.

The VMware Infrastructure Client 2.5 can coexist with 1.x versions, the GSX Server Client version 3.x, and the VMware remote console. Older VMware clients do not need to be removed.

VMFS-3 Partitioning

For best performance, use the VI Client rather than the ESX Server 3.5 installer to set up your VMFS-3 partitions. Using the VI Client ensures that the starting sectors of partitions are 64K aligned, which improves storage performance.

VMotion

You cannot use VMotion to migrate a virtual machine with 16GB of memory or more to ESX Server hosts earlier than version 3.5. Resize the guest operating system memory or migrate the virtual machine to a compatible version of ESX Server 3.

Paravirtualization

When enabled in a virtual machine, VMI Paravirtualization uses one of the six available PCI slots. Refer to Basic System Administration for more details on paravirtualization.

Guided Server Consolidation

To use the Guided Server Consolidation feature in VirtualCenter 2.5 effectively, make sure that your VirtualCenter Server belongs to a domain rather than a workgroup. If assigned to a workgroup, your VirtualCenter Server might not be able to discover all domains and systems available on the network when using guided server consolidation. Refer to Basic System Administration for more information.

Distributed Resource Scheduler and Resource Pools

If a host is added to a cluster, you can no longer create child resource pools of that host. You can create child resource pools of the cluster if the cluster is enabled for Distributed Resource Scheduler (DRS).

The host must be in maintenance mode before you can remove it from the cluster.

Non-DRS clusters have no cluster-wide resource management based on shares. Virtual machine shares remain relative to each host.

VMware High Availability (VMware HA) (experimental)

All hosts in a VMware HA cluster must have DNS configured so that the short hostname (without the domain suffix) of any host in the cluster can be resolved to the appropriate IP address from any other host in the cluster. Otherwise, the Configuring VMware HA task fails. If you add a host using its IP address, you must also ensure that the IP address can be resolved (through DNS lookup) to the short hostname.

When you configure VMware HA, a DNS server is required to resolve host names. However, once configured, VMware HA does not require DNS lookup to perform failover operations.
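
A quick way to confirm the name-resolution requirement is to test short-name lookup from each host in the cluster. The following is a minimal sketch with placeholder host names and addresses; if no DNS server manages the short names, equivalent entries can be added to /etc/hosts on each ESX Server 3.5 host:

  # From each host, verify that every other host's short name resolves to the correct address
  nslookup esx01
  nslookup esx02

  # Example /etc/hosts entries (placeholder addresses and names)
  192.168.10.11   esx01.example.com   esx01
  192.168.10.12   esx02.example.com   esx02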

Product Compatibility

Lab Manager 2.5.1 does not support ESX Server 3.5.

Microsoft Cluster Service (MSCS) is not supported in this release of ESX Server 3i.