VMware ESX software has been tested and deployed in a variety of storage network environments. This guide describes the storage devices currently tested by VMware and its storage partners.
ESX, ESXi Embedded, and ESXi Installable are equivalent products from a storage compatibility perspective. In this guide we only explicitly list ESX compatibility information. If a product is listed as supported for ESX, the product is also supported for the corresponding versions of ESXi Embedded and ESXi Installable.
Note: Boot from SAN (Fibre Channel, Fibre Channel over Ethernet, SAS, or iSCSI) is not supported with ESXi version 4.0 and earlier.
If you are experiencing a technical issue with third-party hardware or software that is not found on this list, please refer to our third-party hardware/software support policy at
http://www.vmware.com/support/policies/ThirdParty.html.
VMware works closely with each of its OEMs to drive towards mutual support of ESX at the time of announcement. Due to different product release cycles, levels of testing, and OEM agreements, not all OEM devices will be supported at the general availability date of a new version of ESX. We recommend contacting the OEM vendor for the best information on when their device is planned to be certified with Virtual Infrastructure.
For further details about array firmware, storage product configurations and best practices, please contact the storage vendor.
NOTE: The use of an external enclosure or JBOD connected to a supported SAS/SCSI controller in a supported server is supported, as long as there is no disk sharing among multiple servers or SAS/SCSI cards.
This SAN HCL lists storage devices for ESX 3.0.x and later. It does not include the older ESX 2.5.x and earlier versions, which are listed in
http://www.vmware.com/pdf/esx_SAN_guide.pdf.
Please contact your storage vendors if you do not find devices certified in the SAN HCL list.
Microsoft Windows Failover Cluster with ESX Windows Clustering refers to the Cluster Service in Windows operating systems used in a shared-disk configuration between two virtual machines, or between a virtual machine and a physical system. Such clustering is certified only with a subset of the arrays listed in this guide.
Failover Clustering was previously called Microsoft Cluster Service (MSCS).
Before installing VMware ESX software with your storage array, please examine the lists on the following pages to find out whether your array and configuration are supported. Please refer to your storage vendor for more information and configuration details.
Windows Failover Cluster support with ESX 3.0.x The table below shows the supported Windows operating systems, FC HBA speeds, and drivers.
Windows Failover Cluster support with ESX 3.5.x MSCS/Failover Clustering is supported with ESX 3.5 for both 32-bit and 64-bit virtual machines running
- Windows 2000 SP4 and
- Windows 2003 (x86 and x64) up to and including SP2
Only 4Gb Emulex and QLogic FC HBAs are supported.
Windows Failover Cluster support with ESX 4.0 With native multipathing (NMP), clustering is not supported when the path policy is set to round robin. Please see 'Setup for Failover Clustering and Microsoft Cluster Services' for limitations on MSCS support with PSA.
1. Virtual SCSI adapter and Windows OS supported
- LSI Logic Parallel for Windows Server 2000 SP4
- LSI Logic Parallel for Windows Server 2003 RTM (x86 and x64) up to and including SP2
- LSI Logic SAS for Windows Server 2008 (x86 and x64) up to SP1
2. Only 4Gb QLogic and Emulex Fibre Channel HBAs are supported.
The driver versions supported are as follows:
qla2xxx-400.821.kl.38vmw
qla4xxx-400.5.01.00.vml
lpfc820-400.2.0.30.49vmw
PSA Plug-ins with ESX 4.x and ESX 5.x Array operating modes and path selection behavior are supported through the Pluggable Storage Architecture (PSA) framework. Storage partners may (1) provide their own Multi-Pathing Plug-ins (MPP), (2) use the Storage Array Type Plug-ins (SATP) and Path Selection Plug-ins (PSP) offered by VMware's Native Multipathing (NMP), or (3) provide their own SATP and PSP.
The plug-ins supported with a storage array are noted in the 'Mode' and 'Path Policy' columns of the 'Model/Release Details' page. The VMW_PSP_RR path policy may also be supported even if it is not listed in those columns. A storage partner may recommend VMW_PSP_RR as the path failover policy for certain storage array models. If desired, contact the storage array manufacturer for recommendations and instructions on setting VMW_PSP_RR appropriately (example commands follow the notes below).
With native multipathing (NMP), clustering is not supported when the path policy is set to round robin. Please see 'Setup for Failover Clustering and Microsoft Cluster Services' for limitations on MSCS support with PSA.
Note that VMware does not currently support unloading of third-party PSA plug-ins on a running ESX/ESXi host.
Installation, upgrade or removal of third-party PSA plug-ins may require a reboot of the ESX/ESXi host.
Please refer to the third-party product's documentation for details on the correct procedures for installation, upgrade, or removal.
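As an illustration only (a minimal sketch assuming the ESXi 5.x esxcli namespace; the device identifier naa.xxxxxxxxxxxxxxxx is a placeholder), the loaded PSA plug-ins and the path policy assigned to a device can be inspected, and the policy changed where the array vendor recommends it:

  esxcli storage core plugin list    # list the loaded PSA multipathing plug-ins (NMP and any third-party MPP)
  esxcli storage nmp device list     # show the SATP and PSP currently assigned to each NMP-claimed device
  esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR    # set round robin for one device

Follow the storage vendor's recommendation before changing the path selection policy, and note the clustering restriction on round robin described above.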
Fibre Channel SANs For Fibre Channel SANs, if listed, VMware supports the following configurations, unless footnoted otherwise:
* Basic Connectivity - The ability of ESX hosts to recognize and interoperate with the storage array. This configuration does not allow for multipathing or any type of failover.
* Multipathing - The ability of ESX hosts to handle multiple paths to the same storage device (example commands for listing adapters and paths follow this list).
* HBA Failover - In this configuration, the ESX host is equipped with multiple HBAs connecting to one or more SAN switches. The server is robust to HBA and switch failure only.
* Storage Port Failover - In this configuration, the ESX host is attached to multiple storage ports and is robust to storage port failures.
* Boot from SAN - In this configuration, the ESX host boots from a LUN stored on the SAN rather than a local disk.
* Direct Connect - In this configuration, the ESX host is directly connected to the array. There is no switch between the HBA and the array, and each HBA port must be connected directly to a storage port on the array. Multiple storage ports (also known as FC target ports) on a single Fibre Channel Arbitrated Loop are not supported; therefore, daisy-chaining of storage ports within a storage controller, across storage controllers, or across multiple arrays is not supported. Windows Failover Clustering is not supported in this configuration.
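The multipathing and failover configurations above can be checked from the host. The commands below are a minimal sketch assuming an ESXi 5.x host (earlier ESX releases expose similar information through esxcfg-mpath):

  esxcli storage core adapter list   # list the HBAs the host has discovered
  esxcli storage core path list      # list every path to every device, including its current state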
NOTE: Windows Failover Clustering (MSCS) support applies to Windows 2000 SP4 and Windows 2003 RTM, SP1, R2, and SP2. For ESX version requirements for these operating systems in a cluster environment, please refer to
http://kb.vmware.com/kb/2021. Windows Failover Clustering is supported only with a limited set of HBAs; please refer to the I/O Compatibility Guide (
http://www.vmware.com/pdf/vi3_io_guide.pdf) for the list of HBAs not supported with Windows Failover Clustering.
NOTE: MSCS is not supported, unless footnoted.
NOTE: MSCS is not supported for Direct Connect.
NOTE: Only footnoted storage arrays are supported with Brocade 415 and 425 HBAs.
NOTE: Unless otherwise footnoted, all fibre channel arrays are supported with both 2Gb and 4Gb FC HBAs on ESX.
NOTE: For ESX 3.5 U2 onwards, unless otherwise footnoted, all fibre channel arrays are supported with 2Gb, 4Gb, and 8Gb FC HBAs on ESX.
NOTE: For devices with external SAN storage support, please refer to Storage Virtualization Device (SVD).
NOTE: Unless otherwise noted, all fibre channel storage products are supported in a boot from SAN configuration.
NOTE: End-to-end connectivity at 8Gbps FC speed is supported with 8G FC arrays only if the product details have the prefix '8G FC' or are footnoted. Otherwise, support for 8G FC arrays is limited to speeds of up to 4Gbps.
Storage Virtualization Device (SVD) VMware supports Storage Virtualization Devices (SVD) with ESX 3.0.2 or later, ESX/ESXi 4.x and ESXi 5.
* Gateways are included as part of SVDs.
* Back-end storage arrays must be listed in both the ESX Storage/SAN Compatibility Guide and the SVD vendor's supported list. The back-end array and the SVD are both required to be certified with the same ESX release.
* Do not share the same LUN on the back-end storage array between the SVD and any other host.
* Only devices that are listed with Array Type SVD are allowed to connect to external Fibre Channel SAN storage.
Storage Arrays supported with FCoE CNAs VMware supports Fibre Channel (FC) Arrays connected to Fibre Channel Over Ethernet (FCoE) Converged Network Adapters (CNAs) with ESX 3.5 U4 and newer releases. Native FCoE arrays connected to FCoE CNAs are also supported with ESX 4.0 and newer releases.
Network Attached Storage The following Linux distributions are supported as network attached storage:
ESX 3.x * Red Hat Enterprise Linux 5 NFS Server (Update 2).
* Red Hat Enterprise Linux 3 NFS Server (Update 5).
* Fedora Core 4 NFS Server (2.6.12-1.1456_FC4.9550smp).
* Fedora Core 6 NFS Server (2.6.18-1.2798.fc6 #1 SMP) for ESX 3.5 only.
ESX 4.0 Fedora Core 8 NFS Server
NOTE: Windows Clustering (MSCS) is not supported with NAS.
iSCSI VMware supports connections to iSCSI arrays using the following iSCSI initiators:
Software iSCSI Adapter A software iSCSI adapter is VMware code built into the VMkernel. It allows the host to connect to the iSCSI storage device through standard network adapters. The software iSCSI adapter handles iSCSI processing while communicating with the network adapter. With the software iSCSI adapter, you can use iSCSI technology without purchasing specialized hardware.
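As a minimal sketch (assuming an ESXi 5.x host; earlier ESX releases use the esxcfg-swiscsi command instead), the software iSCSI adapter can be enabled and verified from the command line:

  esxcli iscsi software set --enabled=true   # enable the software iSCSI adapter in the VMkernel
  esxcli iscsi adapter list                  # confirm that the software iSCSI vmhba is now listed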
Hardware iSCSI Adapter A hardware iSCSI adapter is a third-party adapter that offloads iSCSI and network processing from your host. Hardware iSCSI adapters are divided into two categories.
Dependent Hardware iSCSI Adapter: Depends on VMware networking, and iSCSI configuration and management interfaces provided by VMware. This type of adapter can be a card that presents a standard network adapter and iSCSI offload functionality for the same port. The iSCSI offload functionality depends on the host's network configuration to obtain the IP, MAC, and other parameters used for iSCSI sessions. An example of a dependent adapter is the iSCSI licensed Broadcom 5709 NIC.
Independent Hardware iSCSI Adapter: Implements its own networking and iSCSI configuration and management interfaces. An independent hardware iSCSI adapter is a card that presents either iSCSI offload functionality only, or both iSCSI offload functionality and standard NIC functionality. The iSCSI offload functionality has independent configuration management that assigns the IP, MAC, and other parameters used for the iSCSI sessions. An example of an independent adapter is the QLogic QLA4052 adapter.
Hardware iSCSI adapters might need to be licensed. Otherwise, they will not appear in the vSphere Client or vSphere CLI. Contact the adapter's vendor for licensing information.
Please refer to the I/O Compatibility Guide for a list of hardware iSCSI adapters and NICs that can be used with ESX.
For iSCSI storage with the Software and Dependent Hardware iSCSI Adapters, if listed, VMware supports the following configurations, unless footnoted otherwise:
* iSCSI Base Connectivity - The ability of an ESX host to recognize the target and interoperate with it.
* SP failover - In this configuration, the ESX host is attached to multiple storage ports and is robust to storage port failures.
* Boot from iSCSI - In this configuration, ESX hosts boot from the target iSCSI array rather than from a local disk. This is only supported on ESXi starting with ESXi 4.1 and requires the NIC to support iSCSI Boot Firmware Table (iBFT).
* NIC failover for software initiator - If the Ethernet adapters are teamed and one fails, the other one takes over. Both adapters must be connected to the same physical switch and be on the same subnet (both NICs and iSCSI storage ports).
* iSCSI initiator failover - The ESX host is equipped with multiple software and/or dependent hardware iSCSI adapters and is robust to iSCSI adapter failure. This is supported with ESX 4.0 and later versions (one way to give the software initiator multiple physical paths is sketched after this list).
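One common way to give the single software iSCSI adapter multiple physical paths on an ESXi 5.x host is iSCSI port binding. The commands below are an illustrative sketch only; the adapter name vmhba33 and the VMkernel interfaces vmk1 and vmk2 are placeholders, and each VMkernel interface should be backed by a different physical NIC:

  esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1   # bind the first VMkernel NIC to the software iSCSI adapter
  esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2   # bind the second VMkernel NIC
  esxcli iscsi networkportal list --adapter vmhba33             # verify the bound network portals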
For iSCSI storage with the Independent Hardware iSCSI Adapter, if listed, VMware supports the following configurations, unless footnoted otherwise:
* iSCSI Base Connectivity - The ability of an ESX host to recognize the target over an iSCSI HBA and interoperate with it.
* SP failover - In this configuration, the ESX host is attached to multiple storage ports over a hardware iSCSI HBA and is robust to storage port failures.
* Boot from iSCSI - In this configuration, ESX hosts boot from the target iSCSI array rather than from a local disk.
* iSCSI initiator failover - The ESX host is equipped with multiple independent hardware iSCSI adapters and is robust to iSCSI adapter failure.
NOTE: Windows Clustering is not supported with iSCSI.
NOTE: Software initiated iSCSI is supported fully in ESX 3.0 and later releases. Hardware initiated iSCSI is supported in experimental mode only in ESX 3.0. It is supported fully in ESX 3.0.1 and later with iSCSI arrays that have been qualified/certified for use with the hardware initiators.
NOTE: Dependent Hardware iSCSI adapters are supported starting with ESX 4.1 and later versions.
SAS Arrays For SAS arrays, if listed, VMware supports the following configurations, unless footnoted otherwise:
* Basic Connectivity - The ability of ESX hosts to recognize and interoperate with the storage array. This configuration does not allow for multipathing, any type of failover, or sharing of LUNs between multiple hosts.
* Direct Connect - In this configuration, the ESX host is directly connected to the array (that is, no switch between HBA and the array). Windows Clustering is not supported in this configuration.
* LUN sharing - The ability of multiple ESX hosts to share the same LUN.
* Multipathing - The ability of ESX hosts to handle multiple paths to the same storage device.
* HBA Failover - In this configuration, the ESX host is equipped with multiple HBAs connecting directly to the array. The server is robust to HBA failure only.
* Storage Port Failover - In this configuration, the ESX host is attached to multiple storage ports on the same array and is robust to storage port failures.
* Boot from SAS - SAS boot is supported unless explicitly stated in a footnote for a specific array.
Storage IO Control (SIOC) feature in vSphere 4.1, 5.0, and 5.1: The SIOC feature is available starting with VMware vSphere 4.1 GA. SIOC is a QoS feature that is disabled by default. When enabled, this feature monitors ESX-to-array latency to determine when a datastore is congested. When congestion is detected (latency above a threshold), SIOC allocates the limited I/O resources to virtual machines in accordance with their user-specified relative importance. All storage devices listed on the vSphere 4.1 Storage HCL are supported for use with SIOC. There are no special certification requirements for SIOC support.
vSphere Metro Storage Cluster (vMSC) in vSphere 5.0 VMware supports Metro Storage Cluster configurations with vSphere 5.0, where the storage systems present storage that is synchronously replicated internally by the storage controllers, and present such storage as a single LUN from different, geographically distributed sites.
Only array-based synchronous replication is supported; asynchronous replication is not supported. Storage array types FC, iSCSI, SVD, and FCoE are supported. NAS devices are not supported in a vMSC configuration.
Host Access configuration The following terms describe the accessibility from ESXi hosts to the storage device with the appropriate vSphere Native Multipathing.
- Uniform host access configuration - ESXi hosts from all sites are connected to a storage node in the storage cluster across all sites. Paths presented to ESXi hosts are stretched across distance.
- Non-uniform host access configuration - ESXi hosts in each site are connected only to storage node(s) in the same site. Paths presented to ESXi hosts from storage nodes are limited to the local site.
Inter-site links The stretched cluster configuration has two inter-site links: a link between ESXi hosts at different sites, and a link between storage controllers at different sites. The latency requirements for these links are as follows:
- The supported latency between the ESXi host Ethernet networks at the different sites must be less than 10 milliseconds RTT.
- The supported latency for synchronous storage replication must be less than 5 milliseconds RTT.
In the event of an inter-site link failure between storage controllers, I/O continues on the side that holds the site bias for the storage device; on the unbiased side, I/O fails. Although unlikely, if a running virtual machine is on the unbiased side, it is restarted on the biased side. The site bias is set through a vendor-specific interface; ESXi does not set the site bias.
VAAI Block Thin Provisioning Space Reclamation Schedule Changes in vSphere 5.0:
The Space Reclamation part of VAAI Block Thin Provisioning has been delayed. For details on how to disable the UNMAP command used for Space Reclamation, please refer to Knowledge Base article 2007427 (http://kb.vmware.com/kb/2007427).
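As a hedged illustration of the workaround that the article describes (KB 2007427 is authoritative; the advanced option name below is an assumption based on that article), automatic space reclamation can be disabled on an ESXi 5.0 host through an advanced setting:

  esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete   # disable automatic UNMAP-based space reclamation
  esxcli system settings advanced list --option /VMFS3/EnableBlockDelete                # verify the current value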
Windows Failover Cluster support with ESX 5.0 is as follows:
1. Virtual SCSI adapter and Windows OS supported
- LSI Logic Parallel for Windows Server 2000 SP4
- LSI Logic Parallel for Windows Server 2003 RTM (x86 and x64) up to and including SP2
- LSI Logic SAS for Windows Server 2008 (x86 and x64) up to SP2
- LSI Logic SAS for Windows Server 2008 R2 (x86 and x64) SP1
2. Only 4Gb QLogic and Emulex Fibre Channel HBAs are supported.
The driver versions supported are as follows (a sketch for checking the installed driver versions follows this list):
qla2xxx-911.k1.1-19vmw
lpfc820-8.2.2.105.36vmw
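To verify that the installed HBA driver matches one of the versions listed above, the commands below are a minimal sketch assuming an ESXi 5.0 host where the drivers are delivered as VIBs named after the driver modules:

  esxcli software vib list | grep -i -E 'qla2xxx|lpfc'    # show the installed QLogic and Emulex driver VIB versions
  esxcli system module list | grep -i -E 'qla2xxx|lpfc'   # confirm which driver modules are loaded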