VMware vSphere Storage Appliance 5.1.1 Release Notes
vSphere Storage Appliance Manager Installer 5.1.1 | 25 October 2012 | Build 859703
vSphere Storage Appliance Automated Installer 5.1.1 | 25 October 2012 | Build 859703
Standalone VSA Cluster Service 5.1.1 | 25 October 2012 | Windows Build 859644 | Linux Build 858549
Check frequently for additions and updates to these release notes.
Last updated: 2 April 2013
What's in the Release Notes
These release notes provide information about VMware vSphere Storage Appliance 5.1.1, a distributed shared storage solution for VMware vSphere 5.1.
VMware vSphere Storage Appliance 5.1.1 Features
VMware vSphere Storage Appliance 5.1.1 provides a distributed shared storage solution that abstracts the computing and internal hard disk resources of two or three ESXi hosts to form a VSA cluster. The VSA cluster enables vSphere High Availability and vSphere vMotion.
VSA Cluster Features
A VSA cluster provides the following set of features:
- Shared datastores for all hosts in the datacenter
- One replica of each shared datastore
- VMware vMotion and VMware High Availability
- Failover and failback capabilities from hardware and software failures
- Replacement of a failed VSA cluster member
Note: A VSA cluster member is an ESXi host with a running vSphere Storage Appliance that participates in the VSA cluster.
- Recovery of management of an existing VSA cluster after a fatal vCenter Server failure
- Collection of VSA cluster logs
VSA Cluster Components
- vSphere Storage Appliance software
- Two or three identical physical servers running ESXi 5.0 or later
- VMware vCenter Server 5.0 or later
Note: vCenter Server can be installed on a separate physical server or as a virtual machine on one of the ESXi hosts.
What's New in vSphere Storage Appliance 5.1.1
vSphere Storage Appliance 5.1.1 includes many new features and enhancements that improve the management, performance, and security of VSA clusters.
- Support for multiple VSA clusters managed by a single vCenter Server instance.
- Ability to run vCenter Server on a subnet different from a VSA cluster.
- Support for vCenter Server run locally on one of the ESXi hosts in the VSA cluster.
- Ability to install VSA on existing ESXi hosts that have virtual machines running on their local datastores.
- Ability to run the VSA cluster service independently from a vCenter Server instance in the same subnet as the VSA cluster, installed on either Linux or Windows. The VSA cluster service is required for a cluster with two members. For information about how to install the service, see the Install and Configure the VSA Cluster Environment section in the VMware vSphere Storage Appliance Installation and Administration documentation.
- Improved supportability for VSA clusters in an offline state. VSA has improved its ability to identify issues that cause a cluster to be offline, and also provides an option to return a cluster to an online state.
- Ability to specify and increase the storage capacity of a VSA cluster.
- Security enhancements to properly secure communication channels among VSA components.
- Support for memory over-commitment in a VSA cluster that includes ESXi 5.1 hosts. A VSA cluster that includes hosts running the earlier ESXi 5.0 version does not support memory over-commitment.
- A new licensing model. In addition to being an add-on for vCenter Server, VSA can now have its own standalone license. Use the add-on license if you manage a single cluster. To manage multiple clusters, you need to obtain the standalone VSA license.
In addition, VSA 5.1.1 extends support for the following:
- RAID: For more information, see RAID Settings and Requirements.
- VMFS heap size is increased to 256MB. This, combined with support for larger drives, allows for a per-node VSA virtual storage capacity of 24TB. This capacity maximum applies in aggregate across all VMFS datastores created on the ESXi host.
vSphere Storage Appliance 5.1 has been removed from the VMware download site due to an issue encountered during installation. It is replaced by vSphere Storage Appliance 5.1.1, which is functionally identical and includes a fix for that installation issue.
For more information, see KB 2036630 and the Resolved Issues section of the release notes.
Updated If you have already installed VSA 5.1, you cannot upgrade to VSA 5.1.1, nor do you need to: VSA 5.1.1 addresses only issues that do not affect systems already running VSA 5.1.
vSphere Storage Appliance supports the VSA Manager Installer wizard and the VSA Automated Installer script as installation workflows. The following table shows a comparison of the requirements for each workflow. Read the VMware vSphere Storage Appliance Installation and Administration documentation to learn more about each workflow.
Updated Hardware Requirements
Two or three servers with a homogeneous hardware configuration
- CPU: 64-bit x86 Intel or AMD processor, 2GHz or faster
- Memory: 6GB minimum, 24GB recommended, 72GB maximum tested
Note: You can have more than 72GB memory per ESXi host as there is no memory limitation for the VSA cluster.
- Updated NIC: Four single-port or two dual-port 1 Gigabit or 10 Gigabit Ethernet NICs, or one quad-port NIC (a single quad-port NIC does not provide NIC redundancy)
- RAID controller: Must support RAID10, RAID6, or RAID5
- Hard disks: All disks used in each host must have the same capacity and performance characteristics. Do not mix SATA and SAS disks. For possible configurations, see RAID Settings and Requirements. Although disk drive configurations with heterogeneous vendor and model combinations, drive capacities, and drive speeds might work, the write I/O performance of the RAID adapter is determined by the slowest drive in the RAID set, and the capacity is a multiple of the smallest drive in the RAID set. VMware strongly recommends that you do not use hybrid disk drive configurations, except when you must rebuild a RAID set with a replacement drive from the server manufacturer that differs only slightly. Substituting a drive smaller than the current minimum-capacity drive causes the RAID set rebuild to fail and is not supported.
vCenter Server 5.1 system on a physical or virtual machine. You can run vCenter Server on one of the ESXi hosts in the VSA cluster. The following configuration is required for vCenter Server installation:
- CPU: 64-bit x86 Intel or AMD processor, 2GHz or faster
- Memory and hard disk space: The amount of memory and hard disk space needed depends on your vCenter Server configuration. For more details, see the vSphere Installation and Setup documentation.
- NIC: 1 Gigabit or 10 Gigabit Ethernet NIC
For VSA Manager, additional hard disk space is required:
- VSA Manager: 10GB
- VSA Cluster Service: 2GB
Network Hardware Requirements and Configuration
- Two 1 Gigabit/10 Gigabit Ethernet switches, recommended
Note: A network configuration with two switches eliminates a single point of failure in the physical network layer.
- One 1 Gigabit/10 Gigabit Ethernet switch, minimum
- In a VSA environment, the network must operate at speeds of 1 Gigabit or higher.
- Static IP addresses. vCenter Server and VSA Manager do not need to be in the same subnet as the VSA clusters, but the members of each VSA cluster, including the VSA cluster service for a two-member configuration, must be in the same subnet.
- (Optional) One or two VLAN IDs configured on the Ethernet switches
- ESXi 5.1 on each host
- Windows Server 2003 or Windows Server 2008 64-bit installation
- vCenter Server 5.1 on a physical system or a virtual machine. You can run vCenter Server on one of the ESXi hosts in the VSA cluster.
- vSphere Client or vSphere Web Client
RAID Settings and Requirements
Updated The following RAID configurations show a sample of valid combinations of the number of drives and maximum drive capacities for SAS disks. Capacities smaller than those shown in this sample are also supported:
- 10 X 0.5T => 4.5T VMFS datastore
- 8 X 0.75T => 5.25T VMFS datastore
- 7 X 1T => 6T VMFS datastore
- 6 X 1.5T => 7.5T VMFS datastore
- 5 X 2T => 8T VMFS datastore
- 3 X 3T => 6T VMFS datastore
- 4 X 2.5T => 7.5T VMFS datastore
- 4 X 3T => 9T VMFS datastore
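The sample capacities are consistent with the standard RAID5 usable-capacity arithmetic, (number of drives - 1) x drive size, with one drive's worth of space consumed by parity. The RAID5 assumption here is ours, inferred from the figures above; a quick sketch of the arithmetic:

```python
def raid5_usable_tb(drives: int, drive_tb: float) -> float:
    """Usable RAID5 capacity in TB: one drive's worth of space holds parity."""
    return (drives - 1) * drive_tb

# Matches the sample list, e.g. 10 x 0.5T -> 4.5T and 4 x 3T -> 9T.
```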
Maximum supported VMFS datastore limit per host is 24T. This capacity maximum applies in aggregate across all VMFS datastores created on the ESXi host.
Maximum supported VMFS datastore limit per host is 8T. This is not a VMFS datastore limit; it is the limit imposed by the expected aggregate disk drive resiliency for a RAID set. Beyond this limit, the storage resiliency falls below the acceptable level.
Disk Rotational Speed
- At least 7200 RPM
- At least 10000 RPM
Note: For best performance, select 15000 RPM disks.
Resolved Issues
The following issues have been resolved since the release of vSphere Storage Appliance 1.0. The list of resolved issues pertains to this release of vSphere Storage Appliance only.
- While upgrading from VSA 1.0 to VSA 5.1, if you run install.exe without any parameters, the existing VSA 1.0 clusters are deleted. This issue is resolved in this release.
- The AutoRun application in the VSA 5.1 ISO runs install.exe instead of the Installation and Administration Guide. This issue is resolved in this release. The AutoRun application in the VSA 5.1.1 ISO runs the Installation and Administration Guide.
- While upgrading from VSA 1.0 to VSA 5.1, if you run the cleanup.bat script with fewer than three parameters, the existing VSA 1.0 clusters are deleted. This issue is resolved in this release. A warning message is now displayed stating that all VSA clusters will be deleted if you proceed.
- When you reboot a VSA node, a full synchronization occurs instead of an incremental synchronization. This issue is resolved in this release.
- Node replacement for a three-member cluster no longer requires a temporary vCenter Server standard license.
- When new hosts are added to an existing VSA datacenter, the Access Control List (ACL) is automatically updated and the NFS shares are automatically mounted on those hosts.
- Several changes have been made to the Reconfigure VSA Cluster Network workflow to improve the user experience.
- VSA can now be installed on vCenter Server that has IPv6 enabled. However, VSA does not support IPv6. If VSA is installed on a vCenter Server that has IPv6 enabled, the VSA Manager and other components will still use and support only IPv4.
- ClusterOffline, StorageOffline, and MemberOffline events now generate the following alarms that are sent to the vCenter Server:
- VSA Storage Cluster in this Datacenter [Internal ID: datacenter-2] Offline
- VSA Storage Entity [Internal ID: datastore-111] Offline
- VSA Cluster Service in this Datacenter [Internal ID: datacenter-2] Offline
- VSA Member [Internal ID: vm-110] Offline
Known Issues
The known issues in this vSphere Storage Appliance release are described below, grouped by category.
- The Select Datacenter page of the VSA Installer displays an error
If any of the ESXi hosts in the datacenter that you select for the VSA cluster uses a distributed vSwitch, the VSA Installer displays the following error: java.security.InvalidParameterException : Invalid gateway: null. This issue occurs even when you do not intend to use the host with the distributed vSwitch for the VSA cluster.
Workaround: All participating ESXi hosts must use a standard vSwitch for the management network. If your datacenter includes non-participating hosts that use distributed vSwitches, move these hosts to a different datacenter.
- VSA Manager installation fails with an error
When installing VSA Manager, you might see the following error message:
Port port_number is already in use. This error indicates that another process might be using the port required by VSA Manager.
Workaround:
- Use the netstat command to find the PID of the process using that port: netstat -ano | findstr port_number
- Stop the process.
- When the process stops, use netstat again to ensure that the port is available before continuing with the VSA Manager installation.
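The port check can also be scripted. The sketch below is a generic TCP probe of our own, not part of the VSA tooling; the port number is whatever the installer reported as in use:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True when no process is listening on the given TCP port.

    Attempts a connection: a refused connection (nonzero result from
    connect_ex) means the port is available for VSA Manager.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) != 0
```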
- VSA 5.1.1 installation fails with the Error 2896: Executing action failed message
This problem might occur when the location of the temp drive is set to a drive other than C:, where VSA Manager is to be installed.
Workaround: Make sure that the user and system TEMP and TMP variables point to a location on the C: drive. For more information, see KB 2035893.
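A quick way to verify the variables is sketched below. This is an illustrative check of our own, not a VMware utility:

```python
import os

def temp_vars_on_drive(drive: str = "C:") -> bool:
    """Check that both the TEMP and TMP environment variables point at the given drive."""
    return all(
        os.environ.get(var, "").upper().startswith(drive.upper())
        for var in ("TEMP", "TMP")
    )
```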
- Attempts to uninstall VSA Manager fail when vCenter Server is uninstalled first
If you have uninstalled vCenter Server from the system where both vCenter Server and VSA Manager run, you might not be able to uninstall VSA Manager.
If you need to uninstall vCenter Server, make sure to uninstall VSA Manager and other plug-ins first.
- Entering a license key that does not include VSA support results in an incorrect warning message
When you enter an incorrect license key on the License Information page of the VSA Manager installer, the following message appears:
The license of this vCenter Server and/or Virtual Storage Appliance has expired. Provide a valid license key to continue with the installation.
This message is incorrect because the entered license key has not expired. The message should indicate that the license key does not support VSA and, as a result, installation cannot proceed.
Workaround: Enter a license key that includes VSA support.
- You can have only one instance of the VSA cluster service per physical server
One instance of the VSA cluster service is always installed with VSA Manager. Use the installer for the VSA cluster service only when installing the VSA cluster service on a separate server without VSA Manager. For information about how to install the service, see the Install and Configure the VSA Cluster Environment section in the vSphere Storage Appliance Installation and Administration documentation.
- Updating your system without using the recommended upgrade order causes upgrade failures with vSphere Storage Appliance
If you do not follow the recommended order of component upgrades, the upgrades fail. Specifically, upgrading ESXi before vSphere Storage Appliance (VSA) causes the VSA upgrade to fail. Upgrading VSA before upgrading vCenter Server causes VSA to stop working when vCenter Server is upgraded, due to metadata and licensing changes. Follow the recommended order when upgrading your system. The following is the recommended order for upgrade:
- Upgrade vCenter Server from version 5.0 to 5.1.
- Upgrade the vSphere Storage Appliance from version 1.0 to 5.1.1.
- Enter cluster maintenance mode.
- Upgrade the ESXi hosts from version 5.0 to 5.1.
- Exit cluster maintenance mode.
- If you have already upgraded VSA to version 5.1.1 before upgrading vCenter Server, upgrade vCenter Server (selecting the Use the existing DB option) and then uninstall and reinstall VSA Manager.
- If you have already upgraded ESXi to version 5.1 before upgrading VSA, reinstall ESXi 5.0 on your hosts preserving the local VMFS datastore, and then restore certain configurations before you can upgrade VSA. For information about reinstalling ESXi 5.0 and restoring the appropriate configurations, see the VMware Knowledge Base article 2034424.
- Before upgrading to VSA 5.1.1, verify that the VSA cluster is up and functioning properly
When a VSA cluster is up and functioning properly, the VSA Manager GUI displays the status of all Appliances and Datastores as Online.
- After a failed upgrade to VSA 5.1.1, VMFS heap size remains set to 256MB
During the VSA upgrade, the heap size of a VMFS datastore is increased to 256MB. If the upgrade fails and VSA Manager reverts to its initial state, the heap size of the VMFS datastore does not change to the original value and remains set to 256MB.
Workaround: Manually reset the VMFS heap size to the original value.
- Attempts to upgrade VSA Manager fail with an error
You see the following error message: A cluster is not available. This failure might occur when you upgrade VSA Manager to version 5.1.1, but do not have a previously created VSA cluster.
Workaround: Uninstall the earlier version of VSA Manager before installing VSA Manager 5.1.1.
- After an upgrade to VSA 5.1.1, a VSA cluster might take longer to exit maintenance mode
When you upgrade from VSA 1.0 to VSA 5.1.1, the VSA cluster might take more than 20 minutes to exit the cluster maintenance mode.
- A failed VSA upgrade might leave an orphaned VSA virtual machine
If the VSA upgrade failure is caused by a lost communication with an ESXi host during or after virtual machine deployment, an orphaned virtual machine might be left on the datacenter after the upgrade rollback.
Workaround: Manually delete the orphaned virtual machines before you retry the upgrade.
- Upgrade to VSA 5.1.1 fails if a VSA cluster was recovered in VSA 1.0
If a VSA cluster was recovered in VSA 1.0, and the vApp properties were not manually restored on the VSA virtual machines, upgrade to VSA 5.1.1 might fail.
Workaround: Restore the vApp configuration and provide the correct networking properties. For details, see the VMware Knowledge Base article 2033916.
Interoperability with vSphere Issues
- vSphere Update Manager scan and remediation tasks fail on ESXi hosts that are part of a VSA cluster
When you perform scan and remediation tasks with vSphere Update Manager on ESXi hosts that are part of a VSA cluster, the tasks might fail.
Workaround: Before you perform scan and remediation tasks, place the VSA cluster member in maintenance mode.
- On the VSA Manager tab, click Appliances.
- In the Host column, right-click the ESXi host for which you want to perform scan and remediation tasks and select Enter Appliance Maintenance Mode.
- In the confirmation dialog box, click Yes.
The status of the VSA cluster member changes to Maintenance.
- In the Entering Maintenance Mode dialog box, click Close.
- Perform scan and remediation tasks on the ESXi host that accommodates the VSA virtual machine that is in maintenance mode.
- On the VSA Manager tab, click Appliances.
- Right-click the VSA cluster member that is in maintenance mode and select Exit Appliance Maintenance Mode.
- In the Exiting Maintenance Mode dialog box, click Close.
- Repeat the steps for each ESXi host for which you want to perform scan and remediation tasks.
- Storage vMotion task of a virtual machine fails when a vSphere Storage Appliance recovers from failure
If you use Storage vMotion to migrate a virtual machine while a vSphere Storage Appliance is recovering from a failure, the Storage vMotion process might fail. While the failed vSphere Storage Appliance is recovering, the migration process might become slow, and the vSphere Client might display the Timed out waiting for migration data error message.
Workaround: After the vSphere Storage Appliance recovers from the failure, restart the Storage vMotion task.
- VSA datastores do not support virtual machines with Fault Tolerance
- I/O throughput to VSA datastores is slower when virtual machines perform disk writes whose sizes are not multiples of 4KB or are not aligned on a 4KB boundary
If an application is configured to perform disk writes that are not multiples of 4KB or are not aligned on a 4KB boundary, the I/O throughput on the VSA datastore that contains the virtual disks is affected by the need to read the contents of the data blocks before writing them to the VSA datastore.
Workaround: To avoid the issue, make sure that your configuration meets the following conditions:
- Disk partitions of the virtual machine start on a 4KB boundary
- Applications that are either bypassing a file system or writing directly to files are issuing I/O that is both aligned and sized to 4KB multiples
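The two conditions above reduce to simple modulo arithmetic; a minimal sketch, where the 4096-byte constant reflects the 4KB boundary described above:

```python
BLOCK = 4096  # the 4KB boundary the VSA datastore expects

def write_is_aligned(offset: int, length: int) -> bool:
    """True when a disk write starts on a 4KB boundary and spans whole 4KB multiples.

    Writes failing either condition trigger the read-modify-write
    cycle that slows I/O throughput on a VSA datastore.
    """
    return offset % BLOCK == 0 and length % BLOCK == 0
```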
- If a VSA cluster member fails in a three-member VSA cluster, you can perform only up to two Storage vMotion tasks
If one of the VSA cluster members fails in a three-member VSA cluster, you cannot perform more than two Storage vMotion tasks between the VSA datastores. If you run three simultaneous Storage vMotion tasks, one of the tasks might time out.
Workaround: Do not run more than two Storage vMotion tasks in a VSA cluster.
- Reconfigure VSA Cluster Network wizard does not check for IP address conflicts in the back-end network
When you perform network reconfiguration of the VSA cluster with the Reconfigure VSA Cluster Network wizard, the wizard does not check for IP address conflicts in the back-end network. The VSA cluster back-end network uses IP addresses in the 192.168.x.x subnet.
Workaround: No workaround is available. You must ensure that the addresses you assign to the back-end network are not used by other hosts or devices.
- One of the VSA datastores appears as Degraded in the VSA Manager UI after a back-end path fault has been resolved
You might also see in the task list a datastore synchronization task that has not started or appears stalled.
This issue might occur when a fault affects both back-end paths. Such a fault results from a loss of network communication on all back-end network interfaces. When communication is lost for a considerable period of time, the cluster is placed in the degraded state.
Workaround: Reboot the VSA node that exports the degraded VSA datastore.
- Correction to the brownfield network configuration port group names in the documentation
The Network Configuration of the vSphere Storage Appliance documentation incorrectly defines the five port groups configured on each host as VSA Front End Network, VM Network, Management Network, VSA Back End Network, and VSA vMotion.
Entering the port group names as shown in the documentation causes the installation to fail.
Use the correct port group names, which must be entered exactly as shown:
- VSA-Front End
- VM Network
- Management Network
- VSA-Back End
- The Select When to Format the Disks page in the Online Help contains incorrect information
The Select When to Format the Disks page indicates that for ESXi 5.1 hosts, disks are automatically configured using the optimized eager zeroed format. This is incorrect because the format is not currently supported.
Workaround: When formatting disks, you must select one of the following options:
- Format disks on first access (default option): Takes less time for installation.
- Format disks immediately: Takes more time for installation but improves the disk performance of the installed cluster.
Once all disk blocks have been written, there is no difference in performance between the two choices.
- Updated Clarification: additional information for Installing and Running the VSA Cluster Service in the documentation
The VSA cluster service is required by a VSA cluster with two members. You can install the service separately on a variety of 64-bit platforms, including Windows Server 2003, Windows Server 2008, Windows 7, Red Hat Linux, and SUSE Linux Enterprise Server.
Install the service only on 64-bit operating systems.