
Details of What's New and Improved in VMware Infrastructure 3 version 3.5

This section covers the following topics:

  • Feature Details
  • Migrating to This Release
  • Installation, Upgrade, and Migration Considerations
  • Product Compatibility

Feature Details

Effective Datacenter Management

  • Guided Consolidation—Guided Consolidation is an enhancement to VirtualCenter that guides new virtualization users through the consolidation process in a wizard-based, tutorial-like fashion. Guided Consolidation leverages capacity planning capabilities to discover and analyze physical systems. Integrated conversion functionality transforms these physical systems into virtual machines and intelligently places them on the most appropriate VMware ESX Server hosts. Guided Consolidation makes it easy for first-time virtualization users to get started and achieve the benefits of server consolidation faster than ever before. Guided Consolidation is best suited to smaller server environments.

    The guided server consolidation interface also includes the following:

    • An easy installation process for VirtualCenter 2.5.
    • An unlicensed evaluation period during which users have access to all VMware Infrastructure 3 features.
    • Context-sensitive instructions, tutorials, and active links that walk the user from zero to running virtual machines on VMware Infrastructure.
  • VMware Converter Enterprise Integration—VirtualCenter 2.5 provides support for integrated Physical-to-Virtual (P2V) and Virtual-to-Virtual (V2V) functionality through an add-on component called VMware Converter Enterprise for VirtualCenter 2.5. This integration ensures seamless migrations of various types of physical and virtual machines to VMware Infrastructure 3. It also includes support for scheduled and scripted conversions, support for Microsoft Windows Vista conversions, and restoration of virtual disk images that are backed up using VCB, all within the VI Client. For additional details on VMware Converter Enterprise for VirtualCenter 2.5, refer to the VMware Converter Enterprise Release Notes.
  • Distributed Power Management (experimental)—VMware Distributed Power Management (DPM) reduces power consumption by intelligently balancing a datacenter's workload. Distributed Power Management, which is part of VMware DRS, automatically powers off servers whose resources are not immediately required and returns power to these servers when the demand for compute resources increases again.

    Distributed Power Management is an optional enhancement to DRS that is experimentally supported in this release. When cluster utilization is low, DPM-enabled DRS consolidates workloads within the cluster onto fewer ESX Server hosts and recommends that certain ESX Server hosts be powered off. When additional capacity for running workloads within the cluster is needed, DPM-enabled DRS recommends powering on ESX Server hosts and rebalances workloads across the powered-on hosts in the cluster. DPM-enabled DRS ensures that sufficient powered-on capacity remains available to satisfy any VMware HA settings.

    DPM can operate in automatic or manual mode with respect to the DRS cluster, and it can be disabled or its mode overridden on a per-host basis. DPM can be enabled in a VMware DRS cluster on any ESX Server host that has the appropriate hardware support and configuration. The NICs used by the VMkernel network must have Wake-on-LAN functionality, which is used to bring an ESX Server host up from a powered-off state. Best practice is to test the wake capability on each ESX Server host on which DPM is to be enabled.

  • Firewall Configuration on ESX Server Hosts through VI Client—Administrators can now configure the firewall on ESX Server hosts through the VI Client.
  • Image customization for 64-bit guest operating systems—Image customization provides administrators with the ability to customize the identity and network settings of a virtual machine's guest operating system during virtual machine deployment from templates. VirtualCenter 2.5 provides support for image customization of the following 64-bit guest operating systems:

    • Windows Vista
    • Windows XP
    • Windows Server 2003 Enterprise SP1
    • Windows Server 2003 Enterprise R2
    • Red Hat Enterprise Linux 4.5
    • Red Hat Enterprise Linux 5.0
    • SUSE Linux Enterprise Server 10 SP1/SP2
  • Provisioning across datacenters—VirtualCenter 2.5 allows you to provision virtual machines across datacenters. As a result, VMware Infrastructure administrators can now clone a virtual machine in one datacenter to another datacenter. You can also clone a virtual machine in one datacenter to a template in another datacenter. Templates can now be cloned between datacenters. You can also perform a cold migration of a virtual machine across datacenters.
  • Batch installation of VMware Tools—VirtualCenter 2.5 supports batch installation of VMware Tools, so VMware Tools can now be updated for selected groups of virtual machines. VMware Tools upgrades can also be scheduled for the next boot cycle. If a VMware Tools update requires a virtual machine reboot, the VMware Infrastructure administrator is notified within the VI Client.
  • Datastore browser—This release of VMware Infrastructure 3 supports file sharing across hosts (ESX Server 3.5 or ESX Server 3i) managed by the same VirtualCenter Server. Virtual machines can be cut and pasted from a datastore attached to an ESX Server 3 host to a datastore attached to an ESX Server 3i host, if both hosts are managed by the same VirtualCenter Server. Virtual machines can also be moved between datastores of hosts of the same type, if both hosts are managed by the same VirtualCenter Server. In addition, ESX Server 3.5 virtual machines exported from a datastore using the Download from Datastore option can now be uploaded to datastores on ESX Server 3i hosts.

    Manage organization and registration of virtual machines at the root level of the datastore. Refer to the VI Client online Help for more information on working with files on datastores and registering virtual machines.

  • Open Virtual Machine Format (OVF)—The Open Virtual Machine Format (OVF) is a virtual machine distribution format that supports sharing of virtual machines between products and organizations. VI Client version 2.5 allows you to import and generate virtual machines in OVF format through the File > Virtual Appliance > Import/Export menu items. For VMware Workstation, VMware Player, and VMware Fusion, you can use the OVF Tool to convert OVF packages to VMware format and from VMware format to OVF. See the VMware OVF Tool document for information about the OVF Tool, which is currently experimental.
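
    As a minimal sketch of the OVF Tool's basic usage (the file names below are placeholders, and because the tool is experimental, the exact syntax may differ; the VMware OVF Tool document is authoritative), a conversion takes a source and a destination:

    ovftool myAppliance.ovf myAppliance.vmx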

Mainframe-class Reliability and Availability

  • Storage VMotion—Storage VMotion allows IT administrators to minimize service disruption due to planned storage downtime previously incurred for rebalancing or retiring storage arrays. Storage VMotion simplifies array migration and upgrade tasks and reduces I/O bottlenecks by moving virtual machines to the best available storage resource in your environment.

    Migrations using Storage VMotion must be administered through the Remote Command Line Interface (Remote CLI), which is available for download at the following location: http://www.vmware.com/download/download.do?downloadGroup=VI-RCLI.
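
    As an illustration only (the server name, datacenter, virtual machine path, and datastore names below are placeholders; consult the Remote CLI documentation for the authoritative syntax), a Storage VMotion migration can be invoked interactively, which prompts for each parameter, or in a single command:

    svmotion --interactive

    svmotion --url=https://vcserver.example.com/sdk --datacenter=DC1 --vm="[storage1] myvm/myvm.vmx:storage2"
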
  • Update Manager—Update Manager automates patch and update management for ESX Server hosts and select Microsoft and Linux virtual machines. Update Manager addresses one of the most significant areas of difficulty for every IT department: tracking patch levels and manually applying the latest security and bug fixes. Patching offline virtual machines is unique to virtual environments and enforces higher levels of compliance with patch standards than physical environments. Integration with DRS ensures zero-downtime ESX Server host patching capabilities.

    Update Manager allows users to define a baseline patch and update level for compliance checking and to scan entire datacenters or individual ESX Server hosts and virtual machines against the baseline. Update Manager also reports on the compliance status and provides remediation through scheduled tasks or manual intervention. It automates virtual machine snapshots prior to patching and provides rollback if the patching fails.

    For additional details on VMware Update Manager, refer to the Update Manager Release Notes.

  • VMware High Availability (HA) enhancements—Enhanced HA provides experimental support for monitoring individual virtual machine failures. VMware HA can now be set up to either restart the failed virtual machine or send a notification to the administrator.
  • VMotion with local swap files—Previously, swap files for virtual machines had to be stored on shared storage for VMotion. VMware Infrastructure 3 now allows swap files to be stored on local storage while still supporting VMotion migrations for these virtual machines. Users can configure a swap datastore policy at the host or cluster level, although the policy can be overridden by the virtual machine configuration.

    During a VMotion migration or a failover for virtual machines with swap files on local storage, if local storage on the destination is selected, the virtual machine swap file is re-created. The creation time for the swap file depends on local disk I/O and can increase if many virtual machines start concurrently, as happens during an ESX Server host failover with VMware HA.

    VMotion migration of virtual machines with local swap files is supported only across ESX Server 3.5 hosts and later with VirtualCenter 2.5 and later.

Platform for any Operating System, Application, or Hardware

  • Management of up to 200 hosts and 2000 virtual machines—VirtualCenter 2.5 can manage many more hosts and virtual machines than previous releases, scaling the manageability of the virtual datacenter to up to 200 hosts and 2000 virtual machines.
  • Large memory support for both ESX hosts and virtual machines—ESX Server 3.5 supports 256GB of physical memory and virtual machines with 64GB of RAM. Upon booting, ESX Server 3.5 uses all memory available in the physical server. If a physical server contains more than 256GB of memory and you require support from VMware, physically remove RAM so that no more than 256GB is installed.
  • ESX Server hosts support for up to 32 logical processors—ESX Server 3.5 fully supports systems with up to 32 logical processors. Systems with up to 64 logical processors are supported experimentally.

    To enable experimental support for systems with up to 64 logical processors in ESX Server 3.5, run the following commands in the service console and reboot the system:

    # esxcfg-advcfg -k 64 maxPCPUS

    # esxcfg-boot -b

  • SATA support—ESX Server 3.5 supports selected SATA devices connected to dual SAS/SATA controllers. For a list of supported dual SAS/SATA controllers, see the ESX Server 3.x I/O Compatibility Guide.

  • 10 Gigabit Ethernet support—Support for 10GigE cards is now available with ESX Server 3.5. You can test using any Neterion or NetXen 10GigE NIC. For NetXen NICs, the firmware must be version 3.4.115 or newer, and the hardware must be Rev 25 or higher. For Neterion NICs, there is no firmware version requirement. Software-initiated iSCSI and NFS running over 10GigE cards are not supported in ESX Server 3.5.
  • NPIV support—ESX Server 3.5 introduces support for N-Port ID Virtualization (NPIV) for Fibre Channel SANs. Each virtual machine can now have its own World Wide Port Name (WWPN). You can use this feature to enable per-virtual-machine traffic monitoring using third party tools and chargeback. In a future release, NPIV will enable per-virtual-machine LUN masking capabilities. Both Emulex and QLogic NPIV HBAs are supported in the default drivers shipped with ESX Server 3.5.
  • Cisco Discovery Protocol (CDP) support—This release of VMware Infrastructure 3 incorporates support for Cisco Discovery Protocol (CDP) to help IT administrators better troubleshoot and monitor Cisco-based environments from within VirtualCenter 2.5 and the VI Client GUI. CDP allows VI administrators to know which Cisco switch port is connected to each virtual switch uplink (that is, each physical NIC). CDP enables VirtualCenter and the VI Client GUI to identify the physical switch's properties, such as switch name, port number, port speed/duplex settings, and so on. CDP support is enabled by default.

    On ESX Server 3.5, you can additionally configure CDP so that information about the physical NIC (vmnic) names and the ESX Server host name is passed upward to Cisco switches. This configuration occurs through the CLI in the service console, as sketched below.
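
    A minimal service console sketch (the vSwitch name is a placeholder; verify the options against your ESX Server 3.5 documentation) for viewing and setting the CDP mode of a virtual switch:

    # esxcfg-vswitch -b vSwitch0

    # esxcfg-vswitch -B both vSwitch0

    Setting the mode to both makes the host both listen for CDP information from the physical switch and advertise its own information to the switch.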

  • NEW: NetFlow support (experimental)—NetFlow is a networking tool with multiple uses, including network monitoring and profiling, billing, intrusion detection and prevention, networking forensics, and Sarbanes-Oxley compliance. NetFlow sends aggregated networking flow data to a third-party collector (an appliance or server). The collector and analyzer report on various information such as the current top flows consuming the most bandwidth in a particular virtual switch, which IP addresses are behaving irregularly, and the number of bytes a particular virtual machine has sent and received in the past 24 hours. NetFlow enables visibility into virtual machine traffic for ESX deployments.

    NetFlow support in ESX Server 3.5 is experimental. For more information, refer to the technical note on NetFlow support.

  • Paravirtualized guest operating system support with VMI 3.0—ESX Server 3.5 supports paravirtualized guest operating systems that conform to the VMware Virtual Machine Interface (VMI) 3.0. VMI is an open paravirtualization interface developed by VMware in collaboration with the Linux community (VMI was integrated into the mainline Linux kernel in version 2.6.22). VMware is also working with commercial Linux vendors to provide support for VMI in their existing distributions.

    ESX Server 3.5 provides full support for Ubuntu Linux 7.04 server and desktop versions, both of which target VMI. VMI is not, however, limited to Linux. Operating system vendors are free to modify their operating system to use the VMI interface that delivers improved guest operating system performance and timekeeping when running in virtual machines. While the applications running on a paravirtualized operating system require no modification whatsoever, they gain the performance benefits delivered through VMI.

    To enable paravirtualization support for a virtual machine on VMware Infrastructure 3, you must edit the virtual machine hardware settings and select the check box related to paravirtualization support under Options > Advanced.
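
    For reference, enabling this option is reflected in the virtual machine's configuration (.vmx) file. The key below is an assumption based on our understanding of VMware's paravirtualization support; prefer the VI Client check box described above and verify the key before editing any .vmx file by hand:

    vmi.present = "TRUE"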

  • Large page size—In ESX Server 3.5, the VMkernel can now allocate 2MB machine pages to guest operating systems that request them. As a result, hypervisor performance increases because of reduced memory management overhead. Guest operating systems that support 2MB pages achieve higher performance when backed with 2MB machine memory pages. This feature is turned on by default.
  • Enhanced VMXNET—Enhanced VMXNET is the next version of VMware's paravirtualized virtual networking device for guest operating systems. Enhanced VMXNET includes several new networking I/O performance improvements, including support for TCP/IP Segmentation Offload (TSO) and jumbo frames. Additionally, Enhanced VMXNET includes support for both 32-bit and 64-bit guests.

    Enhanced VMXNET is supported only for a limited set of guest operating systems:

    • 32/64-bit versions of Microsoft Windows 2003 (Enterprise and Datacenter Editions)
    • 32/64-bit versions of Red Hat Enterprise Linux 5.0
    • 32/64-bit versions of SUSE Linux Enterprise Server 10
    • 64-bit versions of Red Hat Enterprise Linux 4.0
  • TCP Segmentation Offload (TSO)—TCP Segmentation Offload (TSO) improves networking I/O performance by reducing the CPU overhead involved with sending large amounts of TCP traffic. TSO improves performance for TCP data coming from a virtual machine and for traffic, such as VMotion, that is sent out of the server. It is supported in both the guest operating system and in the ESX Server kernel TCP/IP stack.

    TSO is enabled by default in the VMkernel. To take advantage of TSO you must select Enhanced VMXNET or e1000 as the virtual networking device for the guest.

    In some cases, TSO hardware is leveraged. However, the performance improvements related to TSO do not require NIC hardware support for TSO.
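
    For reference, the virtual networking device is recorded in the virtual machine's configuration (.vmx) file; the sketch below shows the e1000 case (the adapter number is illustrative, and changing the device type through the VI Client is the recommended approach):

    ethernet0.virtualDev = "e1000"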

  • Jumbo frames—Jumbo frames allow ESX Server 3.5 to send larger frames out onto the physical network. The network must support jumbo frames (end-to-end) for jumbo frames to be effective. Jumbo frames up to 9KB (9000 bytes) are supported. Like TSO, jumbo frames are supported in both the guest operating system and in the ESX Server kernel TCP/IP stack.

    Before enabling jumbo frames, ensure the NIC or LOM supports jumbo frames. Check with your hardware vendor before enabling jumbo frames on your platform. VMware supports jumbo frames with the following vendors: Intel (82546, 82571), Broadcom (5708, 5706, 5709), NetXen (NXB-10GXxR, NXB-10GCX4), and Neterion (Xframe, Xframe II, Xframe E).

    To enable jumbo frames in a virtual machine, configure Enhanced VMXNET (supported on a limited number of guests) for the guest. Jumbo frames support is disabled by default in the VMkernel and must be enabled through the CLI (see the sketch below). For more information on enabling jumbo frames, see the ESX Server 3 Configuration Guide.

    Jumbo frames are not supported for NAS and iSCSI traffic. They are limited to data networking only.
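
    As a minimal sketch of the CLI steps (the vSwitch name and MTU value are placeholders; the ESX Server 3 Configuration Guide is authoritative), set the MTU on a virtual switch from the service console and then verify it:

    # esxcfg-vswitch -m 9000 vSwitch1

    # esxcfg-vswitch -l

    The guest operating system must also be configured with the larger MTU for jumbo frames to be effective end-to-end.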

  • NetQueue support—VMware supports NetQueue, a performance technology that significantly improves performance in 10 Gigabit Ethernet virtualized environments.

    NetQueue requires MSI-X support from the server platform, so NetQueue support is limited to specific systems. Supported systems include IBM (X366, X460, X3850, X3950), Dell (2900, 2950, 1950), and HP (DL585 G2). NetQueue is currently supported on the Neterion 10 Gigabit Ethernet NICs (Xframe, Xframe II, Xframe E).

    NetQueue is disabled by default and must be enabled through the CLI.

    To enable NetQueue, perform the following steps:

    1. Add the line /vmkernel/netNetqueueEnabled = "TRUE" to /etc/vmware/esx.conf
    2. At the console, execute the following command:

      esxcfg-module -s "intr_type=2 rx_ring_num=8" s2io
    3. Reboot ESX Server for the changes to take effect.

    To disable NetQueue, perform the following steps:

    1. Remove the line /vmkernel/netNetqueueEnabled = "TRUE" from /etc/vmware/esx.conf
    2. At the console, execute the following command: esxcfg-module -s "" s2io
    3. Reboot ESX Server for the changes to take effect.
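
    To confirm the options currently configured for the s2io module, you can query them from the service console (a sketch; this assumes the esxcfg-module query option, -g, behaves on ESX Server 3.5 as documented for ESX Server 3.x):

    # esxcfg-module -g s2io
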
  • IOAT (experimental)—ESX Server 3.5 provides experimental support for Intel I/O Acceleration Technology (IOATv1). IOATv1 is a feature of the motherboard chipset that speeds up memory copies. ESX Server 3.5 takes advantage of this feature, if present, for memory copies in its TCP/IP stack implementation, improving networking performance.
  • InfiniBand—As a result of the VMware Community Source co-development effort with Mellanox Technologies, ESX Server 3.5 is compatible with InfiniBand Host Channel Adapters (HCAs) from Mellanox Technologies. Support for this feature is provided by Mellanox Technologies as part of the VMware Third Party Hardware and Software Support Policy.

    Mellanox Technologies InfiniBand HCA device drivers are not shipped with ESX Server 3.5 but are available directly from Mellanox Technologies at the following URL: http://www.mellanox.com/products/ciov_ib_drivers_vi3-1.php.

  • Round-robin load balancing (experimental)—ESX Server 3.5 enhances native load balancing by providing experimental support for round-robin load balancing of HBAs. Round-robin load balancing can be configured to switch paths to the SAN based on the number of I/Os or megabytes sent down a given path between path switches. The feature can be enabled with default settings through the VI Client; more advanced options are available through the CLI. For more information on using this feature, see the Round-Robin Load Balancing document.

Migrating to This Release

In addition to the new features highlighted above, other changes are implemented in this release of VMware Infrastructure 3. These changes are listed below and also documented in the appropriate guides. The information below assists in the planning of your VMware Infrastructure 3 deployment.

ESX Server 3.5 Changes

  • Service Console—The ESX Server 3.5 service console is a limited distribution of Linux based on Red Hat Enterprise Linux 3, Update 8 (RHEL 3 U8).

  • setinfo messages are limited to 1MB by default—Processes in the guest operating system send informational messages to the ESX Server host through VMware Tools. These messages, known as setinfo messages, can otherwise be of arbitrary length. For security reasons, their size is limited to 1MB by default. 1MB is sufficient for most cases, but you can change this value. Refer to the Security Deployments and Recommendations chapter in the ESX Server 3 Configuration Guide for more information.
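
    The limit is believed to be controlled per virtual machine through a configuration (.vmx) setting; the key and value below (1MB expressed in bytes) are an assumption based on VMware's security guidance, so verify them against the ESX Server 3 Configuration Guide before use:

    tools.setInfo.sizeLimit = "1048576"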

  • Virtual Machine File System (VMFS) starting size must be 1200MB or larger—VMFS is first configured as part of the ESX Server 3 installation. When you create a new VMFS3 volume, it must be 1200MB or larger. In ESX Server 3.0.1, this requirement was 600MB. See the Installation Guide and the Upgrade Guide for more details. You can then customize VMFS as discussed in the ESX Server 3 Configuration Guide.

  • VCB port change—When you configure VMware Consolidated Backup (VCB), the port number to connect to VirtualCenter Server or ESX Server is now 443.

  • Multimedia application support for Virtual Desktop Infrastructure (VDI)—As an optional part of VMware Tools, VMware Infrastructure 3 includes the Wyse multimedia redirection DLL to provide support for multimedia applications in a VMware Virtual Desktop Infrastructure deployment. Multimedia applications (using MPEG1, MPEG2, MPEG4, and WMV content) can now deliver enhanced remote end-user experience and synchronized audio-video to Wyse V10L and V90L thin clients using RDP from a Windows XP Pro SP2-based virtual desktop hosted on VMware Infrastructure 3.

VirtualCenter 2.5 Changes

  • Manage active sessions—VirtualCenter 2.5 allows administrators to view all users logged in to a VirtualCenter Server, terminate any active sessions when necessary, and send a message to those logged in.
  • Manage remote console connections—You can now configure VirtualCenter 2.5 to set the maximum number of allowed console connections (0 to 100) to all virtual machines.
  • Manage database connections—You can control the maximum number of database connections to be created between VirtualCenter 2.5 and the database server in use.
  • Lockdown mode—VirtualCenter 2.5 provides administrators with the option to disable direct remote access to ESX Server 3 hosts as a root user after VirtualCenter 2.5 has taken control of a given host. This is called "lockdown mode." Enabling this mode ensures that the host is managed only through VirtualCenter 2.5. Certain limited management tasks can still be performed while in lockdown mode by logging in to the local console on the host as a non-root user.
  • Virtual machine swap file location—VirtualCenter 2.5 supports configuring a default location for virtual machine swap files for all virtual machines on a given host or cluster. The administrator can choose to store the swap file in the same directory as the virtual machine configuration file or select a datastore specified by the host. If it is not possible to store the swap file in the datastore specified by the host, the swap file is stored in the same folder as the virtual machine.
  • Licensing—VirtualCenter 2.5 provides an unlicensed evaluation mode that doesn't require that you install and configure a license server while installing VirtualCenter 2.5 and ESX Server 3. Also, the license server installed with VirtualCenter 2.5 supports multiple license files in a directory. See the Installation Guide.
  • User account password expiration—VirtualCenter Server creates a user account (the VIM account) on each ESX Server host under its management. VirtualCenter Server assigns a random password to this account and stores the password in its database. In VirtualCenter 2.5, the password automatically changes 30 days after it was last changed. The default expiration age can be changed through the Option Manager by assigning a different value to the key VirtualCenter.VimPasswordExpirationInDays.
  • VirtualCenter 2.5 plug-ins—VMware independently releases optional applications that you can install on top of VirtualCenter 2.5 to provide additional capabilities. For example, you can use the VMware Converter plug-in to convert physical machines into ESX Server 3 virtual machines. Refer to the Quick Start Guide.
  • Automating service behavior based on firewall settings—VirtualCenter 2.5 can automate service startup based on the status of firewall ports. Such automation helps ensure that services start if the environment is configured in a way that enables their function. See the ESX Server 3 Configuration Guide.

Installation, Upgrade, and Migration Considerations

ESX Server 3.5 Installation

If you use iLO or DRAC to install ESX Server 3.5, you might encounter corruption problems if you use the Virtual CD installation method with systems under load. If you must use this method to install ESX Server 3.5, run the media test provided by the ESX Server 3.5 installer.

While installing ESX Server 3.5, the option to create a default network for virtual machines is selected by default. If you proceed with installing ESX Server 3.5 with this option selected, your virtual machines share a network adapter with the service console, which does not provide optimal security.

ESX Server 3.5 Upgrade

ESX Server 3.5 does not allow upgrades from previously unsupported versions. The ESX Server 3.5 installer offers an upgrade only when a previously supported version of ESX Server is found; otherwise, it prompts you to perform a new installation. Refer to the ESX Server 3.5 Installation Guide for installation requirements and the ESX Server 3.5 Upgrade Guide for the Upgrade Support Compatibility Matrix.

You must upgrade VirtualCenter Server before you upgrade VMware ESX Server 3.5. Additionally, some upgrade procedures mentioned in the installation and upgrade guide must occur after you install VMware ESX Server 3.5. If you do not upgrade in the stages described in the Upgrade Guide, you might lose data and access to your servers.

Do not use the pre-upgrade script when upgrading from ESX Server 3.x. You can use the script only on ESX Server 2.x systems. Refer to the Upgrade Guide for more details.

While upgrading from ESX 2.x to ESX Server 3, you cannot revert from VMFS3 to VMFS2. Once upgraded, the VMFS3 volume is usable only with ESX Server 3.x hosts.

While upgrading from ESX 2.x to ESX Server 3, you cannot revert the virtual machine hardware version from VM3 to VM2. Once upgraded, a VM3-format virtual machine is usable only with ESX Server 3.x.

VirtualCenter Installation

If you select a VirtualCenter Server edition that doesn't correspond to your license type during installation, VirtualCenter Server fails to start. Make sure you select the appropriate edition of VirtualCenter Server during installation.

The VI Client installer installs Microsoft .NET Framework 2.0 on your machine. If you have an older version, the installer upgrades it to version 2.0.

VirtualCenter Upgrade

Before upgrading VirtualCenter Server from version 1.2 to version 2.5, you must first upgrade your VirtualCenter Server to version 1.4.1.

If you are upgrading from VirtualCenter Server 1.1 or earlier, first upgrade to version 1.2, then to version 1.4.1. The VirtualCenter database is not preserved unless you first upgrade to at least VirtualCenter Server 1.2.

Licensing

In centralized license server mode, license files are located at the following default location on the machine running the VMware license server: C:\Program Files\VMware\VMware License Server\Licenses. This differs from VirtualCenter 2.0, where the default location of the license file was C:\Documents and Settings\All Users\Application Data\VMware\VMware License Server\vmware.lic. This older location is no longer used.

The license server does not support license files on a network share. Make sure to place your license files in a directory on a system where your license server is installed.

Single-host license files are placed at the following default location on the machine running ESX Server 3: /etc/vmware/vmware.lic. In centralized license server mode, this file exists in the same location but contains no license keys.

Manageability Considerations

VMware does not support management of one ESX Server 3.5 host by multiple VirtualCenter Server machines. Although safeguards exist, you might inadvertently find a host managed by VirtualCenter version 1 and version 2 servers at the same time. If so, shut down the version 1 server immediately, or remove the host from the version 1 server, to prevent corruption of virtual machines or the VirtualCenter database.

The Virtual Infrastructure Client 2.5 can coexist with 1.x versions, the GSX Server Client version 3.x, and the VMware remote console. Older VMware clients do not need to be removed.

VMFS3 Partitioning

For best performance, use VI Client or Virtual Infrastructure Web Access to set up your VMFS3 partitions rather than the ESX Server 3.5 installer. Using VI Client or VI Web Access ensures that the starting sectors of partitions are 64K aligned, which improves storage performance.
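
As a quick sanity check from the service console (the device name below is a placeholder), you can list partitions in sector units and confirm alignment; with 512-byte sectors, a 64K boundary corresponds to a starting sector that is a multiple of 128:

  # fdisk -lu /dev/sda

Partitions created through the VI Client start at sector 128, whereas partitions created by other tools often start at the unaligned sector 63.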

VMotion

You cannot use VMotion to migrate a virtual machine configured with 16GB of memory or more to ESX Server hosts earlier than version 3.5. Resize the guest operating system memory or migrate to a compatible version of ESX Server 3.

Paravirtualization

When enabled in a virtual machine, VMI Paravirtualization uses one of the six available PCI slots. Refer to Basic System Administration for more details on paravirtualization.

Guided Server Consolidation

To use the Guided Server Consolidation feature in VirtualCenter 2.5 effectively, make sure that your VirtualCenter Server belongs to a domain rather than a workgroup. If assigned to a workgroup, your VirtualCenter Server might not be able to discover all domains and systems available on the network when using guided server consolidation. Refer to Basic System Administration for more information.

Distributed Resource Scheduler and Resource Pools

If a host is added to a cluster, you can no longer create child resource pools of that host. You can create child resource pools of the cluster if the cluster is enabled for Distributed Resource Scheduler (DRS).

The host must be in maintenance mode before you can remove it from the cluster.

Non-DRS clusters have no cluster-wide resource management based on shares. Virtual machine shares remain relative to each host.

VMware High Availability

All hosts in a VMware HA cluster must have DNS configured so that the short host name (without the domain suffix) of any host in the cluster can be resolved to the appropriate IP address from any other host in the cluster. Otherwise, the Configuring VMware HA task fails. If you add a host using an IP address, you must also enable DNS lookup (the IP address must be resolvable to the short host name).

When you configure VMware HA, a DNS server is required to resolve host names. However, once configured, VMware HA does not require DNS lookup to perform failover operations.
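
As a quick check (the host names below are placeholders), verify from the service console of each host that the short names of the other cluster members resolve:

  # hostname -s
  esx01
  # ping -c 1 esx02

If a short name does not resolve, add the appropriate DNS records (or /etc/hosts entries) on each host before configuring VMware HA.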

Product Compatibility

Lab Manager 2.5.1 does not support ESX Server 3.5.

Microsoft Cluster Service (MSCS) is not supported in this release of ESX Server 3i.