VMware
Details of What's New and Improved in VMware Infrastructure 3

New Features at the Platform Level

ESX Server 3.x new features are listed below.

  • NAS and iSCSI

    ESX Server 2.x could store virtual machines only on SCSI disks and Fibre Channel SANs. ESX Server 3.x can also store virtual machines on NAS and iSCSI storage, extending the benefits of Virtual Infrastructure to low-cost storage configurations. iSCSI LUNs, like Fibre Channel LUNs, can be formatted with the VMware file system (VMFS). Each virtual machine resides in a single directory. Network attached storage (NAS) appliances must present their file systems over the NFS protocol for ESX Server to be able to use them. NFS mounts are used like VMFS, with ESX Server creating one directory for each virtual machine, as illustrated in the sketch at the end of this item.

    Note: To use NFS or iSCSI, VMware recommends that you read the appropriate chapters in the "Server Configuration Guide." Many essential but non-obvious configuration steps are explained there.

    iSCSI enabled through a software initiator (implemented entirely as a software layer over TCP/IP) is fully supported in this release. iSCSI enabled through a hardware initiator (a physical iSCSI HBA) is also possible but is an experimental feature in this release.
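
    Because each virtual machine lives in its own directory on a VMFS or NFS datastore, a quick way to survey a datastore from the service console is to list those directories. The following Python sketch is illustrative only: the /vmfs/volumes mount point is the usual location for datastores on an ESX Server 3.x host, and the "datastore1" label is an invented example.

        import os

        # Assumed mount point for datastores on an ESX Server 3.x host;
        # "datastore1" is a hypothetical datastore (VMFS or NFS) label.
        DATASTORE = "/vmfs/volumes/datastore1"

        # Each subdirectory typically holds one virtual machine: its .vmx
        # configuration file, virtual disks, NVRAM file, and log files.
        for entry in sorted(os.listdir(DATASTORE)):
            vm_dir = os.path.join(DATASTORE, entry)
            if os.path.isdir(vm_dir):
                vmx_files = [f for f in os.listdir(vm_dir) if f.endswith(".vmx")]
                print(entry, "->", vmx_files or "no .vmx file found")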

  • Four-way Virtual SMP and 16 GB Memory Available to Guest Operating Systems

    Virtual machines can now have up to 4 processors (up from 2) and 16 GB of RAM (up from 3.6 GB) allocated to them.

    Note: Virtual machines with Linux "hugemem" kernels are not supported. See the "Systems Compatibility Guide" for details.

  • 64-Bit Guest Operating System Support

    With the appropriate server hardware, some 64-bit operating systems can run as guests inside the virtual machines on ESX Server 3.x.

    Targeted Operating Systems

    For information on which 64-bit guest operating systems VMware Infrastructure 3 supports, see the Systems Compatibility Guide for ESX Server 3.x.

    Hardware Requirements

    Specific hardware requirements exist for 64-bit guest operating system support. For AMD Opteron-based systems, the processors must be Opteron Rev E or later. For Intel Xeon-based systems, the processors must support Intel Virtualization Technology (VT). Many servers that include VT-capable CPUs ship with VT disabled by default, in which case you must enable it manually. If your CPUs support VT but you do not see the option in the BIOS, contact your vendor to request a BIOS version that lets you enable VT support.

    To determine whether your server has the necessary support, use a CPU Compatibility Tool included on the ESX Server product ISO and available from http://www.vmware.com/download.
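
    The CPU Compatibility Tool on the product ISO is the authoritative check, but as a rough, unofficial approximation on a Linux system you can inspect the CPU feature flags in /proc/cpuinfo. The Python sketch below is an assumption-laden stand-in, not the VMware tool: the lm flag indicates 64-bit (long mode) support, vmx indicates Intel VT, and nx indicates the No eXecute feature discussed in the next item. Note that /proc/cpuinfo cannot tell you whether VT has been disabled in the BIOS.

        # Rough, unofficial check of CPU flags relevant to 64-bit guest support.
        # This is NOT the VMware CPU Compatibility Tool; it only reads the flags
        # the Linux kernel reports and cannot detect VT disabled in the BIOS.
        def cpu_flags(path="/proc/cpuinfo"):
            with open(path) as f:
                for line in f:
                    if line.startswith("flags"):
                        return set(line.split(":", 1)[1].split())
            return set()

        flags = cpu_flags()
        print("64-bit (long mode, lm):", "lm" in flags)
        print("Intel VT (vmx):        ", "vmx" in flags)
        print("NX/XD (nx):            ", "nx" in flags)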

  • NX/XD CPU Security Features

    With the appropriate server hardware, more recent guest operating systems can leverage AMD No eXecute (NX) or Intel eXecute Disable (XD) technology. Both variants (available in most recent CPUs from Intel and AMD) improve security by marking memory pages as data-only to prevent malicious software exploits and buffer overflow attacks. In ESX Server 2.x, these CPU features were hidden from virtual machines and unavailable for their use. In ESX Server 3.x, the NX/XD features are exposed by default. The following guest operating system types are known to make use of NX/XD: Windows Server 2003 (SP1), Windows XP (SP2), Windows Vista, RHEL 4, RHEL 3 (Update 3), SUSE 10, SUSE Linux 9.2, and Solaris 10 Operating Environment.

  • Remote CD/Floppy

    Using either the Virtual Infrastructure (VI) Client or VI Web Access, it is possible to give a virtual machine access to a CD or floppy device from the client's machine. This means, for example, that a user could install a program in a virtual machine running on a remote ESX Server by putting a CD in a drive on a desktop or laptop machine.

  • Hot-Add Virtual Disk

    ESX Server 3.x supports adding new virtual disks to a virtual machine while it is running. This is useful with guest operating systems capable of recognizing hot-add hardware.
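
    Whether a hot-added disk appears immediately depends on the guest operating system. On a Linux guest with a 2.6-series kernel, for example, you can ask each SCSI host adapter to rescan its bus. The sketch below is a generic Linux illustration, not a VMware-specific tool:

        import glob

        # Ask every SCSI host adapter in the guest to rescan its bus so that a
        # disk hot-added through the VI Client becomes visible without a reboot.
        # "- - -" means: rescan all channels, targets, and LUNs on the adapter.
        for scan_node in glob.glob("/sys/class/scsi_host/host*/scan"):
            with open(scan_node, "w") as f:
                f.write("- - -")
            print("rescanned", scan_node)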

  • Raw Device Mapping (RDM)

    It is now possible for an ESX Server host to both utilize SAN-based RDMs and boot from the SAN using the same Fibre Channel HBA. For example, this is useful if you are using array-based snapshotting or replication on LUNs attached to an ESX Server host that boots from a Fibre Channel SAN.

  • Official Support For 32-Bit Solaris 10 Operating Environment Update 1 as a Guest Operating System

    With this release, VMware introduces official support for the 32-bit Solaris 10 Operating Environment Update 1 for x86 platforms. VMware Tools is available starting with this release, so VMware encourages you to fully explore the functionality of Solaris guests, including Virtual SMP, >4GB memory, VMotion, and the capabilities of VMware Tools. Refer to the Guest Operating System Installation Guide for Solaris-specific installation instructions. Also, see the known issues section for other notes.

  • Automated Upgrades of VMware Tools

    VMware Infrastructure 3 introduces the ability to automate the installation and upgrade of VMware Tools for many virtual machines at the same time, without the need to interact with each virtual machine. Detailed instructions are provided in the Installation and Upgrade Guide.

  • New Guest SDK Available

    The VMware Guest API provides hooks that management agents and other software running in the guest operating system of a VMware ESX Server 3.x virtual machine can use to collect certain data about the state and performance of the virtual machine. The Guest SDK includes header files for the Guest API, complete documentation, and sample code.
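
    As an illustration of the kind of data the Guest API exposes, the sketch below calls the Guest SDK's shared library from Python via ctypes inside a guest that has VMware Tools and the Guest SDK library installed. The library name and entry points (VMGuestLib_OpenHandle, VMGuestLib_UpdateInfo, VMGuestLib_GetCpuUsedMs, VMGuestLib_CloseHandle) are assumed to match the Guest SDK documentation; treat the sketch as unverified and consult the SDK's headers and sample code for authoritative usage.

        import ctypes

        # Unverified sketch: call the Guest API from inside a guest via ctypes.
        # Assumes libvmGuestLib.so (shipped with the Guest SDK) is on the loader
        # path and that the entry points below match the SDK headers.
        lib = ctypes.CDLL("libvmGuestLib.so")

        handle = ctypes.c_void_p()
        if lib.VMGuestLib_OpenHandle(ctypes.byref(handle)) != 0:
            raise RuntimeError("could not open a Guest API handle")

        # Refresh the statistics snapshot, then read one counter from it.
        lib.VMGuestLib_UpdateInfo(handle)
        cpu_used_ms = ctypes.c_uint64()
        lib.VMGuestLib_GetCpuUsedMs(handle, ctypes.byref(cpu_used_ms))
        print("CPU used by this virtual machine (ms):", cpu_used_ms.value)

        lib.VMGuestLib_CloseHandle(handle)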

  • Experimental VMware Descheduled Time Accounting Component (VMDesched)

    Starting with ESX Server 3.0, the new experimental VMware Descheduled Time Accounting component (VMDesched) is provided as an optional part of VMware Tools. In the current release, VMDesched is available only for uniprocessor Windows and Linux guest operating systems.

    Refer to the technical note "Improving Guest OS Accounting for Descheduled Virtual Machines" to learn how to install and monitor VMDesched on Linux and Windows guest operating systems.

  • Support for Guest ACPI S1 Sleep Allows You to Wake up a Sleeping Virtual Machine

    VMware Tools provides support for guest operating systems that enable ACPI S1 sleep. This feature requires you to have the latest version of VMware Tools installed.

  • VMware Experimental Feature Support Definition

    VMware includes certain experimental features in some of our product releases. These features are there for you to test and experiment with. We do not expect these features to be used in a production environment. However, if you do encounter any issues with an experimental feature, we are interested in any feedback you are willing to share. Please submit a support request through the normal access methods. You will receive an auto-acknowledgement of your request. We cannot, however, commit to troubleshooting, providing workarounds, or providing fixes for these experimental features.

New Features in VirtualCenter

VirtualCenter 2.x new features are listed below.

  • Virtual Infrastructure Client

    In VMware Infrastructure 3, the VirtualCenter Client has been renamed the Virtual Infrastructure Client, or VI Client, to convey its ability to connect to a VirtualCenter Management Server or to individual ESX Servers. When connected to VirtualCenter 2.x, the VI Client provides full monitoring and management of multihost configurations and multihost functionality, such as VMotion, DRS, and HA. When connected to an individual ESX Server 3.x host, the VI Client provides the subset of functionality needed to manage the virtual machines and configuration of a single host. In both cases, the VI Client contains all of the functionality previously available through the separate Web-based Management User Interface (MUI) in ESX Server 2.x and now provides a single, consistent interface for configuring ESX Server 3.x.

  • Virtual Infrastructure Web Access

    VI Web Access is a lightweight, browser-based application for managing and accessing virtual machines. The initial version included with VMware Infrastructure 3 is not a full peer to the VI Client interface, but it does contain all of the functionality needed to interact with virtual machines, including virtual machine configuration and the ability to interact with a remote virtual machine's mouse, keyboard, and screen entirely through a standard Web browser. Because it requires no client installation, VI Web Access is ideally suited to users who only need to interact with a few virtual machines and do not need the advanced functionality provided by the VI Client. VI Web Access also allows administrators of VirtualCenter and ESX Server environments to create Web links that can be shared with users needing access to virtual machines, and it allows the user interface presented on login to be customized for different users.

    See the VMware VI Web Access Administrator's Guide for additional details.

  • Topology Maps

    VI Clients connected to VirtualCenter contain graphical topology maps that display the relationships between objects in the inventory. These maps can be used to visually discern high-load areas, to see a virtual machine's VMotion candidate hosts, and to plan general datacenter management, and they can be exported.

  • Licensing

    VMware Infrastructure 3 introduces new licensing mechanisms based on industry-standard FlexNet mechanisms. The first option is called "License Server Based" licensing and is intended to simplify license management in large, dynamic environments by allowing licenses to be managed and tracked by a license server. The second option is called "Host Based" licensing and is intended for smaller environments or customers preferring to keep ESX Servers decoupled from a license server.

    "License Server based" license files can be used to unify and simplify the license management of many separate VirtualCenter and ESX Server licenses by creating a pool of licenses centrally managed by a license server. Licenses for the VirtualCenter Management Server and for the VirtualCenter add-on features like the VirtualCenter Management Agent, VMotion, VMware HA, and VMware DRS are only available in this form, which has the following advantages:

    • Instead of maintaining individual serial numbers on every host and tracking all of them manually, the license server allows all available and in-use licenses to be administered from a single location.
    • Future license purchasing decisions are simplified because new licenses can be allocated and re-allocated using any combination of ESX Server form factors. The same 32 processors' worth of licenses, for instance, could be used for sixteen 2-ways, eight 4-ways, four 8-ways, or two 16-ways (or any combination thereof totaling 32 processors).
    • Ongoing license management is also simplified by allowing licenses to be assigned and re-assigned on an as-needed basis as the needs of an environment change, such as when hosts are added or removed or premium features like VMotion, DRS, or HA need to be transferred to different hosts.
    • During periods of license server unavailability, VirtualCenter and ESX Servers using served licenses are unaffected: they rely on cached licensing configurations for the duration of a 14-day grace period, even across reboots.

    "Host Based" license files are not centrally managed and not dynamically allocated but might be placed directly on individual ESX Server hosts. They provide an alternative use similar to the way in which serial numbers were used by ESX Server 2.x. Only the licenses for ESX Server and VMware Consolidated backup are available in this alternative form, which has the following benefits:

    • Unlike license server-based license files, the host-based variety does not require a license server to be installed in ESX Server-only environments.
    • Host-based license files are completely independent from a license server, allowing ESX Server licenses to be added or modified during periods of license server unavailability.

    License server-based and host-based license files are both available for download, but VMware encourages the use of the license server-based variety for usability and manageability. Because VirtualCenter installs a license server locally by default, and because VirtualCenter and its add-on features require the license server-based variety, the easiest way to get up and running quickly is to:

    1. Generate a license server-based file containing all features.
    2. Install VirtualCenter, providing the license server-based file when prompted.
    3. Install and configure ESX Servers.

    For additional information, please refer to the licensing chapter of the Installation and Upgrade Guide.

  • Administrator and Active Sessions User Interface

    The VI Client has a new tab for administration-related tasks, such as managing roles and permissions, monitoring active user sessions, and reviewing license usage. The sessions interface provides controls for broadcasting a message of the day, viewing and managing all of the active users, and terminating user sessions.

  • Clusters

    VirtualCenter 2.x introduces the notion of a cluster of ESX Server hosts. A cluster is a collection of hosts that can, to a certain extent, be managed as a single entity. In particular, the resources from all the hosts in a cluster are aggregated into a single pool. From the resource management perspective, a cluster looks like a stand-alone host, but typically with far more resources available. Some of the key new technologies that make clusters powerful are VMware DRS, resource pools, and VMware HA.

  • VMware DRS

    VMware DRS is the technology that allows the resources from all the hosts in a cluster to be treated as a single aggregated pool. When changes occur in the environment, DRS can tune the resource scheduling on individual hosts as well as use VMotion to rebalance workload across hosts in the cluster. When a virtual machine is powered on, DRS calculates the optimal host on which to start it, given current resource levels and the resource configuration of the new virtual machine.
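
    The details of DRS's placement logic are not documented here, but the basic idea of initial placement can be pictured with a toy greedy scheme: choose the host whose remaining capacity best accommodates the new virtual machine's configured resources. The sketch below is purely conceptual and is not VMware's algorithm; the host names and figures are invented.

        # Toy illustration of initial placement: pick the host with the most
        # remaining headroom for the new virtual machine. This is NOT VMware's
        # DRS algorithm; host names and numbers are invented for the example.
        hosts = [
            {"name": "esx01", "cpu_free_mhz": 4200, "mem_free_mb": 6144},
            {"name": "esx02", "cpu_free_mhz": 9800, "mem_free_mb": 2048},
            {"name": "esx03", "cpu_free_mhz": 7600, "mem_free_mb": 8192},
        ]
        new_vm = {"cpu_mhz": 2000, "mem_mb": 4096}

        # Keep only hosts that can satisfy the virtual machine's configuration.
        candidates = [h for h in hosts
                      if h["cpu_free_mhz"] >= new_vm["cpu_mhz"]
                      and h["mem_free_mb"] >= new_vm["mem_mb"]]

        # Score each candidate by its scarcer remaining-headroom ratio.
        def headroom(h):
            cpu = float(h["cpu_free_mhz"] - new_vm["cpu_mhz"]) / h["cpu_free_mhz"]
            mem = float(h["mem_free_mb"] - new_vm["mem_mb"]) / h["mem_free_mb"]
            return min(cpu, mem)

        best = max(candidates, key=headroom)
        print("power on the new virtual machine on:", best["name"])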

  • Resource Pools

    A resource pool provides a way to subdivide the resources of a stand-alone host or a cluster into smaller pools. A resource pool is configured with a set of CPU and memory resources that are shared by the virtual machines that run in the resource pool. Resource pools are typically used to delegate control over a precisely specified set of resources to a group or individual without giving access to the underlying physical environment. Resource pools can be nested. For example, a large resource pool could be controlled by an engineering organization out of which smaller resource pools are given to individual developers.
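
    A nested resource pool hierarchy can be pictured as a tree in which the children of a pool cannot reserve more than the pool itself provides. The sketch below is a conceptual model only (the pool names and figures are invented), not how ESX Server performs admission control.

        # Conceptual model of nested resource pools: each child's reservation
        # must fit inside its parent's. Not how ESX Server implements this.
        class ResourcePool:
            def __init__(self, name, cpu_mhz, mem_mb):
                self.name, self.cpu_mhz, self.mem_mb = name, cpu_mhz, mem_mb
                self.children = []

            def add_child(self, child):
                used_cpu = sum(c.cpu_mhz for c in self.children) + child.cpu_mhz
                used_mem = sum(c.mem_mb for c in self.children) + child.mem_mb
                if used_cpu > self.cpu_mhz or used_mem > self.mem_mb:
                    raise ValueError("children exceed pool " + self.name)
                self.children.append(child)
                return child

        # Invented example: an engineering pool subdivided per developer.
        engineering = ResourcePool("engineering", cpu_mhz=20000, mem_mb=32768)
        engineering.add_child(ResourcePool("developer-alice", 4000, 8192))
        engineering.add_child(ResourcePool("developer-bob", 6000, 8192))
        print([c.name for c in engineering.children])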

  • VMware HA

    VMware HA (HA) increases the availability of virtual machines by detecting host failures and automatically restarting virtual machines on other available hosts. HA operates on a set of ESX Server 3.x hosts that have been grouped into a cluster with HA enabled. Enabling HA requires almost no configuration. To get the full benefit of HA, the virtual machines that run on the cluster should be placed on shared storage that is accessible to all the hosts in the cluster.

  • Database Support

    The following databases are supported in this VirtualCenter Server release:

    • SQL Server 2000 (SP4 only)
    • MSDE Rel A (bundled with VirtualCenter for demonstration and evaluation purposes)
    • Oracle 9iR2
    • Oracle 10gR1 (versions 10.1.0.3 and higher only)
    • Oracle 10gR2 (all versions)

Other Changes at the Platform Level

ESX Server 3.x improvements are listed below.

  • Simpler, More Flexible Device Management

    The management of networking for the VMware Service Console has been significantly improved and reworked in ESX Server 3.x. The VMware Service Console now enjoys easy load balancing and failover of network and storage adapters and is managed through the same graphical user interface (VI Client) as virtual machines. Servers with limited numbers of NICs, such as blade servers, enjoy particular benefits in simplicity and flexibility.

    In ESX Server 2.x, a virtual machine might have had a single virtual NIC with two physical NICs behind it in a load-balanced and failover configuration through the use of a virtual switch, but the VMware Service Console behaved differently: it directly saw all the NICs that were dedicated to it. That is, there was no virtual NIC presented to the VMware Service Console and no virtual switch behind that virtual NIC; only virtual machines had these. With ESX Server 3.x, the service console behaves and is configured and managed just like a virtual machine. Now, just as with virtual machines, you use the VI Client to connect the service console to a virtual switch and then to physical NICs.

    In ESX Server 2.x, NICs assigned to the service console through the installer or the vmkpcidivy command-line utility were directly dedicated to the service console, while NICs for virtual machines were dedicated to the VMkernel. To manage the service console's NICs, you used the command line; to manage the virtual machines' NICs, you used the management user interface. In ESX Server 3.x, all NICs are dedicated to the VMkernel and are managed through the VI Client. There is no complex, install-time assignment of NICs to either the service console or to virtual machines.

    This new approach brings special benefit to servers with a very limited number of physical NICs, such as blade servers. They now enjoy improved flexibility, throughput, and failover behavior. Specifically, if a server has only two NICs, it is now possible to create a virtual switch that teams both NICs, achieving load-balancing and failover, and then operates all of ESX Server's services (service console, VMotion, NFS, iSCSI, and virtual machine KVM) through this single virtual switch and team of physical NICs.

    For those who use the optional command-line interface through the service console, this change in networking management also affects the output of the ifconfig command. In ESX Server 2.x, running ifconfig would display all the physical NICs that were assigned to the service console (eth0, eth1, and so on). In ESX Server 3.x, when running ifconfig you'll no longer see eth0. Instead, you'll see two items:

    • vmnic0 — This represents properties of the physical NIC. You'll see a MAC address, transmit and receive statistics, and other information associated with this physical device.
    • vswif0 — This represents the virtual network interface in the service console. You'll see similar information to that shown for vmnic0 (a MAC address, transmit and receive statistics, and so on) as well as Layer 3 information for the service console (IP address, netmask, and so on). A short sketch that distinguishes the two kinds of interfaces by name follows this list.
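
    If you want to distinguish the two kinds of interfaces in a script rather than by reading ifconfig output, the interface names themselves carry the distinction. The sketch below reads the interface list from /proc/net/dev on the service console and groups names by prefix; it is a generic Linux illustration based on the naming convention described above.

        # Group service console network interfaces by their name prefix:
        #   vmnicN = physical NICs handled by the VMkernel
        #   vswifN = virtual interfaces presented to the service console
        def interface_names(path="/proc/net/dev"):
            names = []
            with open(path) as f:
                for line in f:
                    if ":" in line:
                        names.append(line.split(":", 1)[0].strip())
            return names

        names = interface_names()
        physical = [n for n in names if n.startswith("vmnic")]
        console = [n for n in names if n.startswith("vswif")]
        other = [n for n in names if n not in physical and n not in console]
        print("physical NICs (vmnic):  ", physical)
        print("service console (vswif):", console)
        print("other interfaces:       ", other)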

    For Storage (Particularly FC HBAs):

    Fibre Channel HBAs are also seeing some improvements in the way they are managed. The major benefit is that, in ESX Server 3.x, the service console enjoys the same path-failover behavior (multipathing) that is available to virtual machines. Particularly for ESX Servers that boot from the SAN, this is a significant improvement.

    In ESX Server 2.x, only one HBA could be shared effectively with the service console. In ESX Server 3.x, all HBAs are handled by the VMkernel, multipathing occurs within the VMkernel, and the service console and virtual machines enjoy the benefits of the multipathing. Also, there's no longer any complex, install-time assignment of HBAs to either the service console or to virtual machines.

    For Networking and Storage:

    The service console command-line interface vmkpcidivy is no longer required. If some configuration of the NIC or HBA devices is necessary, it can be handled through the VI Client or through the SDK scripting interfaces; it is no longer performed through the service console. Any scripts from ESX Server 2.x that attempted to manage and configure such devices using the command-line interface are unlikely to work in ESX Server 3.x.

  • The VMkernel IP Networking Stack Has Been Extended
    The IP networking stack within ESX Server's VMkernel has been extended and now provides IP services to handle the following upper-level functions:

    • Storing virtual machines on iSCSI LUNs (new)
    • Storing virtual machines on NFS file servers
    • Reading ISO files from NFS servers to present them to virtual machines as CDs, for example, for installing guest operating systems
    • VMotion

    For instructions on configuring these, see the Server Configuration Guide.

  • New Version of the VMware File System: VMFS 3
    There is a new generation of VMFS in ESX Server 3.x. Scalability, performance, and reliability have all improved. Subdirectories are now supported. ESX Server creates a directory for each virtual machine and all its component files.

  • Swap Files and VMX Files Are Now on the Virtual Machine Datastore
    When there is insufficient physical memory to handle the needs of all the running virtual machines, ESX Server swaps virtual machine memory out to disk. ESX Server 3.x has one swap file per virtual machine, and the swap file for each virtual machine is now located on the virtual machine datastore volume (for example, VMFS or NAS) in the same directory as the virtual machine's configuration file (the VMX file) and NVRAM file. Having all essential pieces of the virtual machine on the datastore increases reliability and is essential for features such as DRS and HA.

    Make sure that you allocate enough space on your VMFS or NAS volume to handle the additional swap files. If a virtual machine has 500MB of memory assigned to it, the swap file will have a default size of 500MB. This swap file is created when a virtual machine is powered on.

    If you are low on disk space, you might not be able to power on virtual machines, particularly large-memory virtual machines. The swap space required to power on a virtual machine is equal to its memory size minus its memory reservation. For example, a 16GB virtual machine with a 4GB memory reservation needs 12GB of swap space. A virtual machine whose memory reservation equals its full 16GB of memory needs no swap space.
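
    Because the per-virtual-machine swap requirement is simply the configured memory size minus the memory reservation, datastore sizing can be checked with basic arithmetic. A minimal sketch, using invented virtual machine names and sizes:

        # Swap file required per virtual machine = memory size - reservation.
        # Virtual machine names and sizes below are illustrative only.
        vms = [
            {"name": "web01", "mem_mb": 500,   "reservation_mb": 0},      # 500 MB
            {"name": "db01",  "mem_mb": 16384, "reservation_mb": 4096},   # 12 GB
            {"name": "db02",  "mem_mb": 16384, "reservation_mb": 16384},  # none
        ]

        total_swap_mb = 0
        for vm in vms:
            swap_mb = vm["mem_mb"] - vm["reservation_mb"]
            total_swap_mb += swap_mb
            print("%-6s needs %6d MB of swap on its datastore" % (vm["name"], swap_mb))

        print("extra datastore space needed for swap files: %d MB" % total_swap_mb)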

  • SAN Configuration Persistent Binding Functionality is no Longer Required
    In ESX Server 2.x, the management user interface showed three tabs for VMFS, persistent binding, and multipathing. In ESX Server 3.x, the VI Client does not have a persistent binding tab.

    The persistent binding function is no longer needed. Prior to ESX Server 2.5, the pbind tool was used because ESX Server kept track of LUNs using their vmhbaC:T:L:P paths. RDM technology, introduced in ESX Server 2.5, used a volume's internal, unique SCSI identifiers instead. Because ESX Server 3.x uses those internal identifiers for LUNs containing VMFS volumes and for the ESX Server boot disks, the persistent binding function is no longer required.

  • Snapshots are Manageable with the VI Client
    Snapshots now capture the entire virtual machine state: all disks plus memory, processor, and other virtual machine state.

    ESX Server 2.x supported undoable, non-persistent, and append-mode disks, which gave you the ability to revert to a known disk state. In ESX Server 3.x, taking snapshots extends this functionality to include memory and processor state as well. Additionally, snapshots are configured on an entire virtual machine and are no longer per-disk. Per-disk behavior can still be achieved by marking individual disks as independent.

  • vmres and vmsnap Scripts (Released in ESX Server 2.5) Are Superseded by the VMware Service Console Version of the VMware Consolidated Backup Command-Line Utilities
    Installing this release of ESX Server 3.x also installs the VMware Service Console version of the VMware Consolidated Backup command-line utilities. These replacements for the vmres and vmsnap scripts are provided with this release as a technology preview: sample code that is not fully supported.

    A description of how to use these utilities to back up and restore virtual machines on ESX Server 3.x is provided in the "Backing Up and Restoring Virtual Machines in ESX Server 3.0" technical note. You can also use these utilities to restore virtual machines that were backed up using vmres on ESX Server 2.5.x onto an ESX Server 3.x host. This is described in the "Restoring ESX Server 2.5.x Virtual Machines in ESX Server 3.0" technical note.

  • Potential Scalability Bottlenecks Have Been Removed
    In ESX Server 2.x, one vmx process per running virtual machine ran in the service console to implement certain virtual machine functionality. In ESX Server 3.x, these processes are no longer bound to the service console but instead are distributed across a server's physical CPUs.

  • Deprecated Command-Line Interfaces and New Scripting Interfaces
    With this release, all functionality is available through the VI Client. While you can still run commands, VMware strongly recommends that you do not do so. If you want a scripted behavior, VMware advises you to use the SDK scripting APIs instead of the command line. If you must use the command line, keep in mind that:

    • Commands are likely to change from release to release.
    • Certain operations require an ESX Server reboot before they take full effect, for example, before the VI Client and other management tools become aware of the changes.

  • ESX Server Scripted Installation Support
    ESX Server scripted installation mechanisms previously found in the Web-based management user interface are included in this release as part of the Virtual Infrastructure Web Access application. The files generated through this utility can be used to perform unattended installations of ESX Server through a bootable floppy or third-party remote provisioning tools.

  • Third-Party Management Software
    Do not use older versions of third-party management software from server OEM partners as they are not compatible with this release. VMware advises against even attempting to run them.

  • Third-Party Software and the new VMware Service Console Firewall
    You might want to run software within the service console. Although this is generally discouraged, it is possible. With ESX Server 3.x, any such software that requires network connectivity might not work properly until ports in the built-in service console firewall are explicitly opened. Certain third-party products require you to open additional ports in the service console firewall before they can work correctly.

Other Changes to VirtualCenter

VirtualCenter 2.x improvements are listed below.

  • Default Power Operation is Hard Power Off
    Right-clicking a virtual machine in the VI Client shows separate power-off and shutdown options. Power off, sometimes called hard power off, is analogous to pulling the power cable on a physical machine and always works. Shut down, or soft power off, leverages VMware Tools to perform a graceful shutdown of the guest operating system. In certain situations, such as when VMware Tools is not installed or the guest operating system is hung, shutdown might not succeed. The VI Client has a red power button, and in this release the default action of this button is hard power off. If you want to perform a graceful shutdown of a guest, either use the right-click option or shut down the operating system directly from inside the guest. Alternatively, the behavior of the power button can be changed on a per-virtual-machine basis by clicking Edit Settings and selecting VMware Tools under the Options tab.

  • VirtualCenter Inventory
    VirtualCenter 2.x introduces a streamlined inventory model. In previous versions of VirtualCenter, the host and virtual machine inventory was rigidly structured around farms, farm groups, and virtual machine groups, all within a single hierarchy. In the new version, the notion of a farm has been replaced by that of a datacenter, and the inventory provides greater flexibility for organizing objects into folders. Within each datacenter, two alternative hierarchical views can be selected and modified: one organizes the inventory by physical host and resource pool groupings, the other by virtual machine groupings. Within each datacenter, networks and datastores are now also primary entities that can be viewed in the inventory. The folders that make up the host and virtual machine hierarchies can be used for organization and for assigning permissions, but they do not place any limits on VMotion, which can be used between any two compatible hosts within the same datacenter.

  • Virtual Machine Templates
    Template creation, the ability to designate specific virtual machines as golden images from which to deploy new virtual machines, has been redesigned to fit into the new inventory model and updated to address a real-world requirement: the need to periodically power on virtual machine templates to keep them updated with the most recent operating system and application patches. Instead of tracking virtual machine templates in a completely separate inventory, VirtualCenter 2.x unifies templates into the main inventory with other virtual machines but distinguishes them with a different icon and by preventing them from being powered on. As such, templates can now be:

    • Viewed from the Virtual Machines and Templates or the Hosts and Clusters inventory views.
    • Quickly converted back and forth between virtual machines that can power on or be updated and templates that cannot be powered on but can be used as the source images from which to deploy new virtual machines.
    • Stored in monolithic (runnable) virtual disk format for quick template to virtual machine conversions or stored in sparse (non-runnable) virtual disk format to conserve storage space.

    Virtual machine templates stored on the VirtualCenter Management Server are no longer supported in VirtualCenter 2.x. They must reside on an ESX Server datastore; however, an upgrade wizard is provided to upgrade and keep all pre-existing VirtualCenter 1.x templates. ESX Server's added support for NAS shared storage can also be used as an effective replacement for the template repository previously maintained on the VirtualCenter Server.

  • Performance Charts
    Performance charts in the VI Client have been redesigned to include more detailed data previously visible only in other tools, such as vmkusage and esxtop, and to provide greater flexibility in customizing the displays. Each level of the inventory hierarchy allows objects and the various performance metrics associated with them to be selected or deselected for viewing in real time or across a specified time interval. Real-time performance statistics are a notable addition: they allow charts to display more detailed information at a 20-second sampling interval. Unlike historical data, real-time statistics are cached on the managed servers for only two hours and are not saved in the database.

  • Permissions and Authentication
    VirtualCenter 2.x introduces support for fine-grained permissions and user-defined roles. There is an extensive set of capabilities that can be added or subtracted from predefined roles or custom-defined roles. Users can be assigned to specific roles to restrict access to the inventory and to capabilities on that inventory.

  • Virtual Machine Migrations and VMotion
    Virtual machine migrations while powered off (cold migrations) are fully supported and enable migrations between two ESX Server 3.x hosts or between ESX Server 3.x and ESX Server 2.x hosts. Some of the changes and improvements to the cold-migration functionality are as follows:

    • ESX Server's added support for iSCSI and NAS as shared storage enables VirtualCenter to perform virtual machine cold-migrations across iSCSI and NAS storage accessed by ESX Server 3.x.
    • Cold migration can be performed within the same host to relocate a virtual machine's disk files to different datastores.
    • Cold migrations prompt for the destination datastore of virtual machine disk files, and an advanced option allows different destinations to be selected for each disk.

    Note: To identify CPU characteristics with respect to 64-bit support and SSE3/NX VMotion compatibility, use the CPU identification tools included on the ESX Server 3.x product ISO.

    Virtual machine migrations while powered on (VMotion) are also fully supported and enable migrations between two ESX Server 3.0 hosts or between two ESX Server 2.x hosts. Other VMotion migration scenarios are also supported, subject to the following considerations:

    • For ESX Server 3.0, migrations with VMotion between an ESX Server 2.x host and an ESX Server 3.0 host are supported. However, because ESX Server 3.0 requires VMFS-3 and ESX Server 2.x requires VMFS-2, the migrated virtual machines are read-only. This type of VMotion migration is primarily useful if you want to preserve a virtual machine while you upgrade the host on which it resides. For more information, see the Installation and Upgrade Guide.

    • For ESX Server 3.0.1 and later, migrations with VMotion from an ESX Server 2.x host to an ESX Server 3.0.1 host are supported. Migration with VMotion from an ESX Server 2.x host to an ESX Server 3.0.1 host can also be used to transfer the virtual machine disk files from a VMFS-2 to a VMFS-3 datastore. See the Installation and Upgrade Guide for more details.

    • Migrations with VMotion from an ESX Server 3.x host to an ESX Server 2.x host are not supported.

    • You need to plan your VMotion migrations so that you only migrate between hosts that have compatible CPUs. For information on CPU compatibility, see the VMware Knowledge Base articles on VMotion CPU compatibility.

    ESX Server's added support for AMD No eXecute (NX) and Intel eXecute Disable (XD) technology also introduces a few changes in VirtualCenter 2.x regarding VMotion compatibility. In ESX Server 3.x, the NX/XD CPU features previously hidden from virtual machines are exposed by default to virtual machines of specific guest operating system types that can use the CPU features for improved security. Exposing these new CPU features to virtual machines, however, means trading off some VMotion compatibility for security, and hosts that were VMotion-compatible in ESX Server 2.x might become incompatible after an upgrade to ESX Server 3.x if they are NX/XD mismatched. To provide more flexibility in dealing with CPU features that affect VMotion compatibility, and to provide a mechanism to restore the VMotion compatibility behavior of VirtualCenter 1.x and ESX Server 2.x when desired, VirtualCenter 2.x allows customization at the per-virtual-machine and per-host levels instead of using top-down, VirtualCenter-wide masks (a conceptual sketch of the masking idea follows the list below):

    • Per-virtual machine compatibility masks allow individual virtual machines to take advantage of advanced CPU features or to hide advanced CPU features to maintain VMotion compatibility.
    • Per-host settings will also provide a way to set default configurations for all the virtual machines of a particular guest operating system type at once, but this capability is not available in this release.
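
    Conceptually, a per-virtual-machine compatibility mask clears selected CPU feature bits before the hosts' features are compared for VMotion compatibility. The sketch below uses invented feature bits for NX/XD and SSE3 to show the trade-off; it is not the real CPUID bit layout or VirtualCenter's implementation.

        # Conceptual illustration of per-VM CPU feature masking for VMotion.
        # The bit positions are invented for the example, not real CPUID layout.
        NX = 1 << 0      # NX/XD support
        SSE3 = 1 << 1    # SSE3 support

        host_a = NX | SSE3   # newer host that exposes NX/XD
        host_b = SSE3        # older host without NX/XD

        def compatible(features_a, features_b, vm_mask):
            # The VM is VMotion-compatible between the hosts if the features it
            # can see (after masking) are identical on both of them.
            return (features_a & vm_mask) == (features_b & vm_mask)

        default_mask = NX | SSE3           # NX exposed by default in ESX Server 3.x
        hide_nx_mask = default_mask & ~NX  # per-VM mask that hides NX again

        print("default mask, compatible:", compatible(host_a, host_b, default_mask))
        print("NX hidden,    compatible:", compatible(host_a, host_b, hide_nx_mask))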

    For additional details, refer to the VMotion chapter of the Basic Systems Administration Guide and the Additional Migration Upgrade Scenarios chapter of the Installation and Upgrade Guide.

  • Alarms
    Alarms in VirtualCenter 2.x include range tolerances, which trigger an alarm only after an exceptional condition has escalated beyond a predefined range, and time frequencies, which trigger an alarm only after a specified time has elapsed.
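
    The two new alarm controls can be thought of as a dead band around the threshold and a minimum spacing between triggers. The following is a conceptual sketch with invented numbers, not VirtualCenter's implementation:

        # Conceptual alarm with a range tolerance and a trigger frequency:
        # threshold 75% CPU, tolerance 5 points, at most one alarm per 300 s.
        THRESHOLD, TOLERANCE, MIN_INTERVAL_S = 75.0, 5.0, 300

        samples = [(0, 74.0), (60, 78.0), (120, 82.0), (180, 83.0), (600, 81.0)]
        last_trigger = None

        for t, cpu_pct in samples:
            exceeded = cpu_pct > THRESHOLD + TOLERANCE     # outside the tolerance
            spaced = last_trigger is None or (t - last_trigger) >= MIN_INTERVAL_S
            if exceeded and spaced:
                print("t=%4ds: alarm triggered (CPU %.1f%%)" % (t, cpu_pct))
                last_trigger = t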

  • Events
    Most operations performed through VirtualCenter lead to the creation of an event. In VirtualCenter 2.x, these events include information about who performed the action and when. VirtualCenter supports filtering events based on the user who caused the event or the event type and exporting events to a text file.