What's New and Improved in This Release
New Features at the Platform Level
ESX Server 3.0 new features are listed below.
- NAS and iSCSI
ESX Server 2 could store virtual machines only on SCSI disks and Fibre Channel SANs. ESX Server 3.0 can store virtual machines on NAS and iSCSI as well, bringing the benefits of Virtual Infrastructure to low-cost storage configurations. iSCSI LUNs, like Fibre Channel LUNs, can be formatted with the VMware file system (VMFS). Each virtual machine resides in a single directory. Network attached storage (NAS) appliances must present file systems over the NFS protocol for ESX Server to be able to use them. NFS mounts are used like VMFS, with ESX Server creating one directory for each virtual machine.
Note: To use NFS or iSCSI, we highly recommend that you read the appropriate chapters in the "Server Configuration Guide." Many non-obvious and essential configuration steps are explained there.
iSCSI enabled through a software initiator (100% implemented as a software layer over TCP/IP) is fully supported in this release. iSCSI enabled through a hardware initiator (through a physical hardware iSCSI card) is also possible, but only supported "experimentally" in this release.
See the Known Issues section for additional details.
- 4-way Virtual SMP and 16 GB Memory Available to Guest Operating Systems
Virtual machines can now have up to 4 processors (up from 2) and 16 GB of RAM (up from 3.6 GB) allocated to them.
Note: Virtual machines with Linux "hugemem" kernels are not supported. Please see the "Systems Compatibility Guide" document for details.
- 64-bit Guest Operating System Support
With the appropriate server hardware, some 64-bit operating systems can run as experimental guests inside the virtual machines on ESX Server 3.0.
Targeted Operating Systems
The following 64-bit operating systems are expected to be experimentally supported at GA for ESX Server 3.0 and VirtualCenter 2.0:
- Windows 2003 Server Standard and Enterprise
- Red Hat Enterprise Linux (RHEL) 3 and 4
- SUSE Linux Enterprise Server (SLES) 9
- Solaris 10 Update 1
- Windows Longhorn Beta
Definition of Experimental Support
Experimental guests have several key properties that distinguish them from fully supported guest operating systems.
- The level of stability and performance for experimental guests is expected to be usable for test and development environments but is not sufficient to be used in production.
- A subset of the typical guest features and operations, including those provided through VMware Tools, is available; the exact subset varies by guest.
- The support policies for experimental guests and for fully supported guests differ greatly. Details on VMware's support policies for experimental features are located at:
There are specific hardware requirements for 64-bit guest operating system support. For AMD Opteron-based systems, the processors must be Opteron Rev E and later. For Intel Xeon-based systems, the processors must include support for Intel's Virtualization Technology (VT). Note that many servers that include CPUs with VT support might ship with VT disabled by default, and VT must be enabled manually. You might also need to contact your vendor to request a BIOS version that allows you to enable VT support if your CPUs do support VT, but you do not see this option in the BIOS.
For the RC 2 release, you should not run 64-bit guests in an environment or cluster where they could be migrated to a host that does not meet the hardware requirements for 64-bit guests.
To determine whether your server has the necessary support, you can use a CPU Compatibility Tool included on the ESX Server product ISO.
See the Known Issues section for additional details.
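On Linux-based systems, a rough first-pass version of this check can be sketched by looking for the CPUID virtualization flags in /proc/cpuinfo. This is an illustrative sketch, not the CPU Compatibility Tool from the product ISO: a CPU can advertise VT via CPUID while the feature is still disabled in the BIOS, so the bundled tool remains the authoritative check.

```python
def hypervisor_capable(cpuinfo_text):
    """Return which hardware virtualization extension, if any, the CPU
    advertises: 'vmx' for Intel VT, 'svm' for AMD-V, or None.

    Note: a flag here only reflects CPUID capability; VT can still be
    disabled in the BIOS even when 'vmx' is present.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "vmx"   # Intel Virtualization Technology
            if "svm" in flags:
                return "svm"   # AMD-V (Opteron Rev E and later)
    return None

if __name__ == "__main__":
    sample = "processor : 0\nflags : fpu vme pae vmx sse2\n"
    print(hypervisor_capable(sample))  # → 'vmx'
```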
- Creating 64-bit Virtual Machines
By default, 64-bit guests are now visible in the Virtual Infrastructure Client interface and marked as "experimental."
- NX/XD CPU Security Features
With the appropriate server hardware, more recent guest operating systems are able to leverage AMD's No eXecute (NX) or Intel's eXecute Disable (XD) technology. Both variants (available in most recent CPUs from Intel and AMD) improve security by marking memory pages as data-only to prevent malicious software exploits and buffer overflow attacks. In ESX Server 2.x, these CPU features were hidden from virtual machines and unavailable for use. In ESX Server 3.0, the NX/XD features are exposed by default. The following guest operating system types are known to make use of NX/XD: Windows Server 2003 (SP1), Windows XP (SP2), Windows Vista, RHEL 4, RHEL 3 (Update 3), SUSE 10, SUSE Linux 9.2, and Solaris 10.
- Remote CD/Floppy
Using either the Virtual Infrastructure Client or Virtual Infrastructure Web Access, it is now possible to give a virtual machine access to a CD or floppy device from the client's machine. This means, for example, that a user could install a program in a virtual machine running on a remote ESX Server by putting a CD in a drive on a desktop or laptop machine.
- Hot-Add Virtual Disk
ESX Server 3.0 supports adding new virtual disks to a virtual machine while it is running. This is useful with guest operating systems capable of recognizing hot-add hardware.
- Raw Device Mapping (RDM)
It is now possible for an ESX Server host to both utilize SAN-based RDMs and boot from the SAN using the same Fibre Channel HBA. For example, this is useful if you are using array-based snapshotting or replication on LUNs attached to a boot-from-FC-SAN ESX Server host.
- Official Support For 32-bit Solaris 10 Update 1 as a Guest Operating System
With this release, we are introducing official support for 32-bit Solaris 10 Update 1 for x86 platforms. VMware Tools are available starting with this release, so we encourage you to fully explore the functionality of Solaris guests, including Virtual SMP, >4GB memory, VMotion, the capabilities of VMware Tools, and so on. Refer to the Guest Operating System Installation Guide for Solaris-specific installation instructions. Also, see the Known Issues section for other notes.
- Upgrading is Now Supported in This Release
This release supports upgrading from ESX Server 2 and Virtual Center 1 to this release. Detailed upgrade instructions and guidelines are provided in the Installation and Upgrade Guide.
Special instructions and known issues for upgrading from earlier Beta versions to this release are described in Known Issues with Upgrades.
- Automated upgrades of VMware Tools
ESX Server 3.0 and VirtualCenter 2.0 introduce the ability to automate the installation and upgrade of VMware Tools for many virtual machines at the same time, without needing to interact with each virtual machine. Detailed instructions are provided in the Installation and Upgrade Guide.
- New Guest SDK available
The VMware Guest API provides hooks that management agents and other software running in the guest operating system in a VMware ESX Server 3.0 virtual machine can use to collect certain data about the state and performance of the virtual machine. The Guest SDK includes header files for the Guest API, complete documentation, and sample code. The Guest SDK package can be downloaded from the "Download" section of the ESX Server 3.0/VirtualCenter 2.0 beta site.
- Experimental VMware Descheduled Time Accounting Component (VMDesched)
Starting with ESX Server 3.0, the new, experimental VMware Descheduled Time Accounting component (VMDesched) is provided as an optional part of VMware Tools. In the current release, VMDesched is available only for uniprocessor Windows and Linux guest operating systems.
Refer to the technical note Improving Guest OS Accounting for Descheduled Virtual Machines to learn how to install and monitor VMDesched on Linux and Windows guest operating systems.
- Support for Guest ACPI S1 Sleep Allows You to Wake Up a Sleeping Virtual Machine
VMware Tools provides support for guest operating systems that enable ACPI S1 sleep. This feature requires you to have the latest version of VMware Tools installed.
New Features in VirtualCenter
VirtualCenter 2.0 new features are listed below.
- Virtual Infrastructure Client
In VirtualCenter 2.0/ESX Server 3.0, the VirtualCenter Client has been renamed the Virtual Infrastructure Client, or VI Client, to convey its ability to connect to a VirtualCenter Management Server or to individual ESX Servers. When connected to VirtualCenter 2.0, the VI Client provides full monitoring and management of multi-host configurations and multi-host functionality, such as VMotion, DRS, and HA. When connected to an individual ESX Server 3.0, the VI Client provides the subset of functionality needed to manage the virtual machines and configuration of a single host. In both cases, the VI Client contains all of the functionality previously available through the separate Web-based Management User Interface (MUI) in ESX Server 2.x and now provides a single, consistent interface to configure ESX Server 3.0.
- Virtual Infrastructure Web Access
Virtual Infrastructure Web Access is a lightweight, browser-based application for managing and accessing virtual machines. The initial version included with VirtualCenter 2.0 and ESX Server 3.0 is not a full peer to the VI Client interface, but it does contain all of the functionality needed to interact with virtual machines, including virtual machine configuration and the ability to interact with a remote virtual machine's mouse, keyboard and screen entirely through a standard Web browser. Because of its zero client installation overhead, VI Web Access is ideally suited for users that only need to interact with a few virtual machines and don't need all of the advanced functionality provided by the VI Client. VI Web Access also allows VirtualCenter/ESX Server environment administrators to create Web links that can be shared with the users needing to access the virtual machines and allows the user interfaces presented upon login to be customized for different users.
See the VMware VI Web Access Administrator's Guide for additional details.
- Topology Maps
VI Clients connected to VirtualCenter now contain graphical topology maps that display the relationships between objects in the inventory. These maps can be used to visually discern high load areas, see a virtual machine's VMotion candidate hosts, plan general datacenter management, and can also be exported.
- New Licensing Mechanisms
ESX Server 3.0 and VirtualCenter 2.0 introduce new licensing mechanisms based on industry-standard FlexNet mechanisms. The first option, called "License Server Based" licensing, is intended to simplify license management in large, dynamic environments by allowing licenses to be managed and tracked by a license server. The second option, called "Host Based" licensing, is intended for smaller environments or customers preferring to keep ESX Servers decoupled from a license server.
"License Server based" license files can be used to unify and simplify the license management of many separate VirtualCenter and ESX Server licenses by creating a pool of licenses centrally managed by a license server. Licenses for the VirtualCenter Management Server and for the VirtualCenter add-on features like the VirtualCenter Management Agent, VMotion, VMware HA, and VMware DRS are only available in this form, which has the following advantages:
- Instead of maintaining individual serial numbers on every host and tracking all of them manually, the license server allows all licenses available and being used to be administered from a single location.
- Future license purchasing decisions are simplified because the new licenses available might be allocated and re-allocated using any combination of ESX Server form factors. The same 32 processors worth of licenses, for instance, could be used for sixteen 2-ways, eight 4-ways, four 8-ways, or two 16-ways (or any combination thereof totaling 32 processors).
- Ongoing license management is also simplified by allowing licenses to be assigned and re-assigned on an as-needed basis as the needs of an environment change, such as when hosts are added/removed or premium features like VMotion/DRS/HA need to be transferred to different hosts.
- During periods of license server unavailability, VirtualCenter and ESX Servers using served licenses will be unaffected by relying on cached licensing configurations for the duration of a 14-day grace-period (even across reboots).
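The form-factor flexibility described above can be illustrated with a short sketch that enumerates every mix of host sizes exactly consuming a 32-processor license pool. The pool size and form factors come from the text; the enumeration itself is only an illustration of the licensing arithmetic, not a VMware tool.

```python
def allocations(pool, sizes=(16, 8, 4, 2)):
    """Yield every mix of host form factors, as ((host_size, count), ...)
    tuples, that exactly consumes `pool` processors' worth of licenses."""
    if not sizes:
        if pool == 0:
            yield ()
        return
    size, rest = sizes[0], sizes[1:]
    for count in range(pool // size + 1):
        for tail in allocations(pool - count * size, rest):
            yield (((size, count),) + tail) if count else tail

if __name__ == "__main__":
    # The examples from the text: sixteen 2-ways, eight 4-ways,
    # four 8-ways, or two 16-ways all consume the same 32-CPU pool.
    for mix in allocations(32):
        print(mix)
```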
"Host Based" license files are not centrally managed and not dynamically allocated but might be placed directly on individual ESX Server hosts. They provide an alternative use similar to the way in which serial numbers were used by ESX Server 2.x. Only the licenses for ESX Server and VMware Consolidated backup are available in this alternative form, which has the following benefits:
- Unlike license server based license files, the host based variety has the benefit of not requiring a license server to be installed for ESX Server only environments.
- Host-based license files are completely independent from a license server, allowing ESX Server licenses to be added or modified during periods of license server unavailability.
Both License Server Based and Host Based license files are being made available for download, but VMware encourages the use of the license server based variety for usability and manageability. Since VirtualCenter installs a license server locally by default and its add-on features require the license server based variety, the easiest way to get up and running quickly is to:
- Download license server based files using the "activation codes" emailed as part of the beta participation announcements. Please note that the activation codes must be used to download license files, and cannot be used as licenses.
- Install VirtualCenter and accept the defaults, which automatically set up a VMware license server on the same system as the VirtualCenter server, providing the license server based license files downloaded in the previous step during the installation process.
At this point, a license server will be installed with VirtualCenter and will be configured with an inventory of licenses, but individual ESX Servers being managed will still need licenses assigned to them.
- Start the VI Client interface, connect to the VirtualCenter Management Server, and add ESX Servers to the VirtualCenter inventory (or connect directly to an ESX Server). Then use the VI Client to assign licenses to each ESX Server under management:
- Select an ESX Server from the inventory, navigate to the "Configuration" tab and click on the "Licensed Features" link.
- Choose a license source. By default, ESX Servers are automatically configured to obtain licenses from the VMware License Server set up with VirtualCenter, but a host based license file can be provided as the license source instead.
- Choose an ESX Server license type. Enable "ESX Server Standard" as the choice of ESX Server Edition.
By default, ESX Servers start in an unlicensed state that allows configuration but prevents virtual machines from being powered on until an ESX Server license type has been acquired. ESX Server "Starter" licenses will have limited production-oriented features (like the ESX Server edition included with VMTN subscriptions) and are not being provided in this release.
- Choose ESX Server feature add-ons to enable. Virtual SMP and VMware Consolidated Backup are ESX Server features that require additional licenses assigned to hosts. The VirtualCenter Management Agent, VMotion, VMware DRS, and VMware HA licenses are implicitly assigned to hosts when they are added to the VirtualCenter inventory or added to clusters that are DRS/HA enabled.
- Review the total number of licenses consumed and available on the license server through the "License Viewer" portion of the VirtualCenter Administrator's tab.
Alternative configurations for ESX Server are to acquire the licenses from a license server installed separately from VirtualCenter, or directly from a host based license file. Within the same environment, both kinds of configurations can be used for different ESX Servers, but individual ESX Servers must be configured to use one or the other.
The license server based files provided during beta enable one VirtualCenter Management Server and up to 32 processors' worth of ESX Server and VirtualCenter features. The host-based license file provided as an alternative enables up to 16 processors' worth of ESX Server features. For beta testing purposes, the same host-based license file can be re-used on multiple ESX Servers.
- Administrator and Active Sessions User Interface
The VI Client has a new tab for administration-related tasks, such as managing roles and permissions, monitoring active user sessions, and reviewing license usage. The sessions UI provides controls for broadcasting a "message of the day," viewing and managing all of the active users, and terminating user sessions.
- Clusters
VirtualCenter 2.0 introduces the notion of a cluster of ESX Server hosts. A cluster is a collection of hosts that can, to a certain extent, be managed as a single entity. In particular, the resources from all the hosts in a cluster are aggregated into a single pool. From the resource management perspective, a cluster thus looks like a stand-alone host, but one with far more resources available. The key new technologies that make clusters powerful are VMware DRS, resource pools, and VMware HA. These are described below.
- VMware DRS
VMware DRS is the technology that allows the resources from all the hosts in a cluster to be treated as a single aggregated pool. When changes occur in the environment, DRS can tune the resource scheduling on individual hosts as well as use VMotion to rebalance workload across hosts in the cluster. When a virtual machine is powered on, DRS calculates the optimal host on which to start it, given current resource levels and the resource configuration of the new virtual machine.
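As a simplified illustration of initial placement (not VMware's actual DRS algorithm, which weighs many more factors), a greedy rule might pick the candidate host with the most free capacity that can still fit the new virtual machine:

```python
def pick_host(hosts, vm_cpu, vm_mem):
    """Toy initial-placement rule: among hosts with enough free CPU (MHz)
    and memory (MB) for the new VM, choose the one with the most free
    capacity. Illustrative sketch only."""
    candidates = [h for h in hosts
                  if h["free_cpu"] >= vm_cpu and h["free_mem"] >= vm_mem]
    if not candidates:
        return None  # no host can accept the VM
    best = max(candidates, key=lambda h: (h["free_cpu"], h["free_mem"]))
    return best["name"]

if __name__ == "__main__":
    hosts = [
        {"name": "esx-a", "free_cpu": 4000, "free_mem": 8192},
        {"name": "esx-b", "free_cpu": 12000, "free_mem": 16384},
    ]
    print(pick_host(hosts, vm_cpu=2000, vm_mem=4096))  # → 'esx-b'
```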
- Resource Pools
A resource pool provides a way of subdividing the resources of a stand-alone host or a cluster into smaller pools. A resource pool is configured with a set of CPU and memory resources that are shared by the virtual machines that run in the resource pool. A typical use of resource pools is to delegate control over a precisely specified set of resources to a group or individual without giving access to the underlying physical environment. Resource pools can be nested. For example, a large resource pool could be controlled by an engineering organization out of which smaller resource pools are given to individual developers.
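The nesting described above can be modeled with a small sketch. The class below is purely illustrative (it is not the VirtualCenter API); it shows how a child pool's reservation is carved out of its parent's unreserved capacity:

```python
class ResourcePool:
    """Minimal model of a nested resource pool: each pool reserves CPU
    (MHz) and memory (MB) out of its parent. Illustrative only."""

    def __init__(self, name, cpu_mhz, mem_mb, parent=None):
        self.name, self.cpu_mhz, self.mem_mb = name, cpu_mhz, mem_mb
        self.children = []
        if parent is not None:
            parent.add_child(self)

    def add_child(self, child):
        # A child cannot reserve more than the parent has left over.
        if (child.cpu_mhz > self.unreserved_cpu()
                or child.mem_mb > self.unreserved_mem()):
            raise ValueError("child reservation exceeds unreserved capacity")
        self.children.append(child)

    def unreserved_cpu(self):
        return self.cpu_mhz - sum(c.cpu_mhz for c in self.children)

    def unreserved_mem(self):
        return self.mem_mb - sum(c.mem_mb for c in self.children)

if __name__ == "__main__":
    # A large engineering pool, from which a developer gets a smaller pool.
    cluster = ResourcePool("cluster", cpu_mhz=24000, mem_mb=65536)
    eng = ResourcePool("engineering", 16000, 32768, parent=cluster)
    alice = ResourcePool("alice", 4000, 8192, parent=eng)
    print(cluster.unreserved_cpu())  # → 8000
```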
- VMware HA
VMware High Availability (HA) increases the availability of virtual machines by detecting host failures and automatically restarting virtual machines on other available hosts. HA operates on a set of ESX Server 3.0 hosts that have been grouped into a cluster with HA enabled. Note that enabling HA requires almost no configuration. To get the full benefit of HA, the virtual machines that run on the cluster should be placed on shared storage that is accessible to all the hosts in the cluster.
- Database Support
The following databases are supported in this VirtualCenter Server release.
- SQL Server 2000 (SP1 and above)
- MSDE Rel A (bundled with VirtualCenter for demo/eval purposes)
- Oracle 9iR2
- Oracle 10gR1 (versions 10.1.0.3 and higher only)
- Oracle 10gR2 (all versions)
Other Changes at the Platform Level
ESX Server 3.0 improvements are listed below.
- Simpler, More Flexible Device Management
For networking: The management of networking for the VMware Service Console has been significantly improved and reworked in ESX Server 3.0. The VMware Service Console now enjoys easy load-balancing and failover of network and storage adapters and is managed through the same graphical user interface (VI Client) as virtual machines. Servers with limited numbers of NICs (such as blade servers) will enjoy particular benefits in simplicity and flexibility.
In ESX Server 2, a virtual machine could have a single "virtual NIC" with two physical NICs "behind" it in a load-balanced and failover configuration through the use of a virtual switch. The VMware Service Console, however, behaved differently: it directly saw all the NICs that were dedicated to it. That is, there was no virtual NIC presented to the VMware Service Console and no virtual switch behind it; only virtual machines had these. With ESX Server 3.0, the VMware Service Console behaves, and is configured and managed, just like a virtual machine. Now, just as with virtual machines, you use the VI Client to connect a service console to a virtual switch and then to physical NICs.
In ESX Server 2, NICs assigned to the VMware Service Console, through the installer or the vmkpcidivvy command line, were directly dedicated to the service console, while NICs for virtual machines were dedicated to the VMkernel. Furthermore, to manage the service console's NICs, you used the command line, and to manage the virtual machines' NICs, you used the management user interface. In ESX Server 3.0, all NICs are dedicated to the VMkernel and are managed via the VI Client. There's no longer any complex, install-time assignment of NICs to either the VMware Service Console or to virtual machines.
This new approach brings special benefit to servers with a very limited number of physical NICs, such as blade servers. They can now enjoy improved flexibility, throughput, and failover behavior. Specifically, if a server has only two NICs, it is now possible to create a virtual switch that teams both NICs (thus achieving load-balancing and failover) and then operates all of ESX Server's services (VMware Service Console, VMotion, NFS, iSCSI, and virtual machine KVM) through this single virtual switch and team of physical NICs.
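The teaming behavior can be illustrated with a toy model: traffic flows over any uplink in the team that is still up, so losing one of the two NICs leaves all services reachable. This is an illustrative sketch only; actual NIC teaming policy is configured through the VI Client.

```python
class NicTeam:
    """Toy model of a virtual switch's uplink team: any link that is up
    can carry traffic, which is what provides failover. Illustrative
    only; hypothetical uplink names."""

    def __init__(self, uplinks):
        self.state = {u: True for u in uplinks}  # True = link up

    def fail(self, uplink):
        self.state[uplink] = False

    def usable(self):
        return [u for u, up in self.state.items() if up]

if __name__ == "__main__":
    team = NicTeam(["vmnic0", "vmnic1"])
    team.fail("vmnic0")          # one physical NIC goes down...
    print(team.usable())         # → ['vmnic1']  (services stay reachable)
```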
For those who use the optional command-line interface through the VMware Service Console, this change in networking management also affects the output of the ifconfig command. In ESX Server 2, running "ifconfig" would display all the physical NICs that were assigned to the VMware Service Console (eth0, eth1, and so on). In ESX Server 3.0, when running ifconfig you'll no longer see eth0. Instead, you'll see two items:
- vmnic0 — This represents properties of the physical NIC. You'll see a MAC address, transmit and receive statistics, and other such information associated with this physical device.
- vswif0 — This represents the virtual network interface in the VMware Service Console. You'll see the same kinds of information as for vmnic0 (a MAC address, transmit and receive statistics, and so on) as well as Layer 3 information for the VMware Service Console (IP address, netmask, and so on).
For Storage (Particularly FC HBAs):
Fibre Channel HBAs are also seeing some improvements in the way they are managed. The major benefit is that, in ESX Server 3.0, the VMware Service Console enjoys the same path-failover behavior (multipathing) that is available to virtual machines. Particularly for ESX Servers that boot from the SAN, this is a significant improvement.
In ESX Server 2, only one HBA could be shared effectively with the VMware Service Console. In ESX Server 3.0, all HBAs are handled by the VMkernel, multipathing occurs within the VMkernel, and the VMware Service Console and virtual machines enjoy the benefits of the multipathing. Also, there's no longer any complex, install-time assignment of HBAs to either the VMware Service Console or to virtual machines.
For Networking and Storage:
The VMware Service Console command-line interface vmkpcidivvy is no longer required. If some configuration of the NIC or HBA devices is necessary, it is handled through the VI Client or through the SDK scripting interfaces, not through the VMware Service Console. Any scripts from ESX Server 2 that attempted to manage and configure such devices using the VMware Service Console's command-line interface are likely not to work in ESX Server 3.0.
- The VMkernel IP Networking Stack Has Been Extended
The IP networking stack within ESX Server's VMkernel has been extended and now provides IP services to handle the following upper-level functions:
- Storing virtual machines on iSCSI LUNs (new)
- Storing virtual machines on NFS file servers
- Reading ISO files from NFS servers in order to present them to virtual machines as CDs (for example, for installing guest operating systems)
For instructions on how to configure these, see the Server Configuration Guide.
- New Version of the VMware File System: VMFS 3
There is a new generation of VMFS in ESX Server 3.0. Scalability, performance, and reliability have all improved. Furthermore, subdirectories are now supported: ESX Server creates a directory for each virtual machine and all of its component files.
- Swap Files and VMX Files Are Now on the Virtual Machine Datastore (for Example, the SAN-Based VMFS)
When there's insufficient physical memory to handle the needs of all the running virtual machines, ESX Server swaps a virtual machine out to disk. ESX Server 3.0 has one swap file per virtual machine, located on the virtual machine's datastore volume (for example, VMFS or NAS) in the same directory as the virtual machine's configuration file (the VMX file) and NVRAM file. Having all essential pieces of the virtual machine on the datastore increases reliability and is essential for features such as DRS and HA.
Make sure that you allocate enough space on your VMFS or NAS volume to handle the additional swap files. If a virtual machine has 500MB of memory assigned to it, the swap file will have a default size of 500MB. This swap file is created when a virtual machine is powered on.
If you are low on disk space, you might not be able to power on virtual machines, particularly large-memory virtual machines. The swap space required to power on a virtual machine is equal to its memory size minus its memory reservation. For example, a 16GB virtual machine with a 4GB memory reservation needs 12GB of swap space. A virtual machine whose memory is fully reserved (for example, 16GB of memory with a 16GB reservation) needs no swap space.
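The sizing rule above is simple enough to capture in a few lines (an illustrative helper, not a VMware utility):

```python
def swap_required_mb(mem_mb, reservation_mb=0):
    """Per-VM swap space needed at power-on: memory size minus memory
    reservation, never negative. A fully reserved VM needs no swap."""
    return max(0, mem_mb - reservation_mb)

if __name__ == "__main__":
    print(swap_required_mb(500))                    # → 500 (default swap size)
    print(swap_required_mb(16 * 1024, 4 * 1024))    # → 12288 (12GB)
    print(swap_required_mb(16 * 1024, 16 * 1024))   # → 0 (fully reserved)
```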
- SAN Configuration Persistent Binding Functionality is no Longer Required
In ESX Server 2, the management user interface showed three tabs for VMFS, persistent binding, and multipathing. In ESX Server 3.0, the VI Client does not have a persistent binding tab.
The persistent binding function is no longer needed. Prior to ESX Server 2.5, the pbind tool was used because ESX Server kept track of LUNs, using their vmhbaC:T:L:P paths. RDM technology, introduced in ESX Server 2.5, used a volume's internal, unique SCSI identifiers. Because ESX Server 3.0 uses those internal identifiers for LUNs containing VMFS volumes and the ESX Server boot disks, the persistent binding function is no longer required.
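The design change can be illustrated with a sketch: looking a volume up by a stable internal identifier keeps the mapping valid even if the vmhbaC:T:L:P path changes after a SAN reconfiguration. The paths and identifiers below are hypothetical placeholders.

```python
def find_volume(luns, scsi_id):
    """Look a LUN up by its internal, unique SCSI identifier rather than
    by its positional vmhbaC:T:L:P path, so renumbered paths don't break
    the mapping. Illustrative sketch with made-up values."""
    return next((lun for lun in luns if lun["scsi_id"] == scsi_id), None)

if __name__ == "__main__":
    luns = [
        {"path": "vmhba1:0:3:1", "scsi_id": "id-volume-a"},  # hypothetical
        {"path": "vmhba2:0:7:1", "scsi_id": "id-volume-b"},  # hypothetical
    ]
    # Even if "path" changes after a reconfiguration, the scsi_id lookup
    # still finds the same volume.
    print(find_volume(luns, "id-volume-b")["path"])  # → 'vmhba2:0:7:1'
```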
- Snapshots Are Manageable With the VI Client
Snapshots now capture the entire virtual machine state: all disks, plus memory, processor, and other virtual machine state.
ESX Server 2 supported undoable, non-persistent, and append-mode disks, which gave the ability to revert to a known disk state. In ESX Server 3.0, snapshotting extends this functionality to include memory and processor state as well. Additionally, snapshots are configured on an entire virtual machine and are no longer per-disk. Per-disk behavior can be achieved by using an "independent disk" configuration option.
- Vmres/Vmsnap Scripts (Released in ESX Server 2.5) Are Superseded by the VMware Service Console Version of VMware Consolidated Backup Command-Line Utilities
Installing this release of ESX Server 3.0 also installs the service console version of the VMware Consolidated Backup command-line utilities. These tools are provided with this release as a technology preview and are not fully supported in the final release.
A description of how to use these utilities to back up and restore virtual machines on ESX Server 3.0 is provided in the Backing up and Restoring Virtual Machines in ESX Server 3.0 technical note. You can also use these utilities to restore onto ESX Server 3.0 virtual machines that were backed up using vmres on ESX Server 2.5.x. This is described in the Restoring ESX Server 2.5.x Virtual Machines in ESX Server 3.0 technical note.
- Potential Scalability Bottlenecks Have Been Removed
In ESX Server 2, one vmx process per running virtual machine ran in the service console to implement certain virtual machine functionality. In ESX Server 3.0, these processes are no longer bound to the service console but instead distributed across a server's physical CPUs.
- Deprecated Command-Line Interfaces and New Scripting Interfaces
With this release, all functionality is available through the VI Client. While you can still run commands, VMware strongly recommends that you do not do so. If you want a scripted behavior, we advise you to use the SDK scripting APIs instead of the command line. If you determine that you must use the command line, keep in mind that:
- Commands are likely to change from release to release.
- Certain operations require an ESX Server reboot before they take full effect; for example, the VI Client and other management tools might not become aware of the changes until after a reboot.
- ESX Server Scripted Installation Support
ESX Server scripted installation mechanisms previously found in the Web-based management user interface are included in this release as part of the Virtual Infrastructure Web Access application. The files generated through this utility can be used to perform unattended installations of ESX Server through a bootable floppy or third-party remote provisioning tools.
- Third-Party Management Software
Compatible versions of third-party management software from server OEM partners are not yet available with this release. They are targeted to coincide with the general availability release of ESX Server 3.0. Older versions of such software are not compatible; we advise against even attempting to run them.
- Third-Party Software and the new VMware Service Console Firewall
Some users might try to run software within the ESX Server service console. Although this is generally discouraged, it is possible. With ESX Server 3.0, any such software that relies on network connectivity might not work properly until ports in the built-in service console firewall are explicitly opened. Here is a list of some of the third-party products that require you to open additional ports in the service console firewall:
- OEM Products
- IBM Director — http://www-03.ibm.com/servers/eserver/xseries/systems_management/ibm_director/
- HP Insight Manager — http://h18013.www1.hp.com/products/servers/management/hpsim/index.html
- Dell OpenManage — http://www1.us.dell.com/content/topics/global.aspx/solutions/en/openmanage?c=us&cs=555&l=en&s=biz
- Backup Agents
- VERITAS NetBackup — http://seer.support.veritas.com/docs/251630.htm
- IBM Tivoli Storage Manager — http://www.ibm.com/software/tivoli/products/storage-mgr
- EMC Legato NetWorker — www.legato.com/support/websupport/tech_bulletins/index.htm?includefile=388.html
- CommVault Galaxy Backup & Recovery — www.commvault.com/backup_and_recovery.asp
- EMC Navisphere
For Navisphere, you might need to open port 6380. For all other third-party products, see the vendor documentation to determine what ports you need to open. To learn how to open ports in the service console firewall using the esxcfg-firewall command, please see the "Security" chapter in the Server Configuration Guide.
Other Changes to VirtualCenter
VirtualCenter 2.0 improvements are listed below.
- Default Power Operation is 'Hard' Power Off
Right-clicking a virtual machine in the Virtual Infrastructure Client shows separate 'Power Off' and 'Shut Down' options. Power Off, sometimes called 'hard power off,' is analogous to pulling the power cable on a physical machine, and always works. 'Shut Down,' or 'soft power off,' on the other hand, leverages VMware Tools to perform a graceful shutdown of the guest operating system. In certain situations, such as when VMware Tools is not installed or the guest operating system is hung, shut down might not succeed. The VI Client also has a red power button. In this release, the default action of this power button is hard power off. If you wish to perform a graceful shutdown of a guest, either use the right-click option or shut down the operating system directly from inside the guest. Alternatively, the behavior of the power button can be changed on a per-virtual-machine basis by clicking 'Edit Settings' and selecting 'VMware Tools' under the Options tab.
- VirtualCenter Inventory
VirtualCenter 2.0 introduces a streamlined inventory model. In previous versions of VirtualCenter, the host and virtual machine inventory was rigidly structured around farms, farm groups, and virtual machine groups, all within a single hierarchy. In the new version, the notion of a farm has been replaced by that of a datacenter, and the inventory offers greater flexibility for organizing objects into folders. Within each datacenter, two alternative hierarchical views can be selected and modified: one organizes the inventory by physical host and resource pool groupings, the other simply by virtual machine groupings. Networks and datastores are now also primary entities that can be viewed within each datacenter's inventory. The folders that make up the host and virtual machine hierarchies can be used for organization and for assigning permissions, but they place no limits on VMotion, which can be used between any two compatible hosts within the same datacenter.
- Virtual Machine Templates
Templates (the ability to designate specific virtual machines as the golden images from which to deploy new virtual machines) have been redesigned to fit the new inventory model and updated to address a real-world requirement: the need to periodically power on virtual machine templates to keep them current with the most recent operating system and application patches. Instead of tracking virtual machine templates in a completely separate inventory, VirtualCenter 2.0 unifies them into the main inventory alongside other virtual machines, distinguishing them with a different icon and by preventing them from powering on. As such, templates can now be:
- Viewed from the "Virtual Machines and Templates" or the "Hosts and Clusters" inventory views.
- Quickly converted back and forth between virtual machines, which can be powered on and updated, and templates, which cannot be powered on but can be used as the source images from which to deploy new virtual machines.
- Stored in monolithic (runnable) virtual disk format for quick template to virtual machine conversions or stored in sparse (non-runnable) virtual disk format to conserve storage space.
It is also worth noting that virtual machine templates stored on the VirtualCenter Management Server are no longer supported in VirtualCenter 2.0 (they must reside on an ESX Server datastore), but an upgrade wizard is provided to migrate and retain all pre-existing VirtualCenter 1.x templates. ESX Server's added support for NAS shared storage can also serve as an effective replacement for the template repository previously maintained on the VirtualCenter Management Server.
- Performance Charts
Performance charts in the VI Client have been redesigned to include more detailed data previously visible only in other tools, such as vmkusage and esxtop, and to provide greater flexibility in customizing the displays. At each level of the inventory hierarchy, objects and the various performance metrics associated with them can be selected or deselected for viewing in real time or across a specified time interval. Real-time performance statistics are a notable addition: they allow charts to display more detailed information at a 20-second sampling rate. Unlike historical data, real-time statistics are cached on the managed servers for only two hours and are not saved in the database.
- Permissions and Authentication
VirtualCenter 2.0 introduces support for fine-grained permissions and user-defined roles. There is an extensive set of capabilities that can be added or subtracted from predefined roles or custom-defined roles. Users can be assigned to specific roles to restrict access to the inventory and to capabilities on that inventory.
- Virtual Machine Migrations and VMotion
Virtual machine migrations while powered off (cold migrations) are fully operational and enable migrations between two ESX Server 3.0 hosts or between ESX Server 3.0 and ESX Server 2.x hosts. Noteworthy changes and improvements to the cold-migration functionality are as follows:
- ESX Server's added support for iSCSI and NAS as shared storage enables VirtualCenter to perform virtual machine cold-migrations across iSCSI and NAS storage accessed by ESX Server 3.0.
- Cold migration can be performed within the same host to relocate a virtual machine's disk files to different datastores.
- Cold migrations prompt for the destination datastore of virtual machine disk files, and an advanced option allows different destinations to be selected for each disk.
Note: To identify CPU characteristics with respect to 64-bit support and SSE3/NX VMotion compatibility, use the CPU identification tools included on the ESX Server 3.0 product ISO.
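As a rough cross-check (not a replacement for the CPU identification tools on the product ISO), the service console's Linux kernel reports the relevant CPU feature flags in /proc/cpuinfo, where "lm" indicates 64-bit long mode, "nx" indicates No eXecute/eXecute Disable, and "pni" indicates SSE3:

```shell
# Show the feature flags reported for the first CPU
grep -m1 '^flags' /proc/cpuinfo

# Report whether the NX/XD feature is visible to the service console
if grep -m1 '^flags' /proc/cpuinfo | grep -qw nx; then
    echo "NX/XD: present"
else
    echo "NX/XD: absent"
fi
```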
Virtual machine migrations while powered on (VMotion) are also fully operational and enable migrations between two ESX Server 3.0 hosts or between two ESX Server 2.x hosts, but are subject to two important limitations:
- Migrations with VMotion between an ESX Server 2.x host and an ESX Server 3.0 host are currently not supported because ESX Server 3.0 requires VMFS 3 and ESX Server 2.x requires VMFS 2.
- ESX Server's experimental support for 64-bit virtual machines also currently means that only 32-bit virtual machines are supported for migrations:
- VMotion (powered on): 32-bit guest virtual machines are fully supported within VMotion-compatible 32-bit CPUs and 64-bit CPUs. 64-bit guest virtual machines are currently unsupported; full support for 64-bit in the future will allow migrations only between supported 64-bit CPUs of the same vendor (Intel-to-Intel or AMD-to-AMD).
- Cold migration (powered off): 32-bit guest virtual machines are fully supported within supported 32-bit CPUs and 64-bit CPUs, and are able to power on irrespective of any CPU incompatibilities. 64-bit guest virtual machines are currently unsupported; full support for 64-bit in the future will allow migrations between all supported 64-bit CPUs (Intel-to-AMD OK).
- ESX Server's added support for AMD's No eXecute (NX) and Intel's eXecute Disable (XD) technology also introduces a few changes in VirtualCenter 2.0 regarding VMotion compatibility. In ESX Server 3.0, the NX/XD CPU features, previously hidden from virtual machines, are exposed by default to virtual machines of specific guest operating system types that can use these CPU features for improved security. Exposing these new CPU features to virtual machines, however, also means trading some VMotion compatibility for security, and hosts that were VMotion compatible in ESX Server 2.x might become incompatible after an upgrade to ESX Server 3.0 if they are NX/XD mismatched. To provide more flexibility in dealing with CPU features that affect VMotion compatibility, and to provide a mechanism to restore VMotion compatibility with VirtualCenter 1.x/ESX Server 2.x when desired, VirtualCenter 2.0 allows customization at the per-virtual machine and per-host levels (instead of top-down, VirtualCenter-wide masks):
- Per-virtual machine compatibility masks allow individual virtual machines to take advantage of advanced CPU features or to hide advanced CPU features to maintain VMotion compatibility.
- Per-host settings will also provide a way to set default configurations for all the virtual machines of a particular guest operating system type at once (this capability is not available in this release).
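To illustrate the general shape of a per-virtual machine mask, VMware virtual machines store CPUID overrides as entries in the virtual machine's .vmx configuration file. The fragment below is a hypothetical sketch only; the exact key names, bit positions, and mask characters should be taken from the VI Client's per-virtual machine CPUID mask editor or the Server Configuration Guide rather than from this example:

```
# Hypothetical CPUID mask entry: each character corresponds to one bit of the
# 32-bit EDX register returned by CPUID leaf 0x80000001 (where NX is reported);
# '-' leaves a bit at its default, while other mask characters force or hide bits.
cpuid.80000001.edx = "----:----:----:----:----:----:----:----"
```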
- Alarms
Alarms in VirtualCenter 2.0 include range tolerances, to trigger an alarm only after exceptional conditions have escalated beyond a predefined range, and time frequencies, to trigger an alarm only after a specified time has elapsed.
- Events
Most operations performed through VirtualCenter lead to the creation of an event. In VirtualCenter 2.0, these events include information about who performed the action and when. VirtualCenter supports filtering events based on the user who caused the event or the event type, and exporting events to a text file.
- OEM Products