Dynamically Allocate I/O Capacity Across Multiple Hosts

VMware vSphere® Storage I/O Control provides I/O prioritization for virtual machines running on a group of VMware vSphere® hosts that have access to a shared storage pool. It extends the familiar constructs of shares and limits, which exist for CPU and memory, to address storage utilization through a dynamic allocation of I/O capacity across a cluster of vSphere hosts. It increases administrator productivity by reducing active performance management.

Storage I/O Control monitors the device latency that hosts observe when communicating with a datastore. When that latency exceeds a set threshold, the feature engages to relieve congestion, and each virtual machine accessing the datastore is allocated I/O resources in proportion to its shares.
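
For administrators who script against the vSphere API, enabling the feature on a datastore might look like the following minimal pyVmomi sketch. The vCenter address, credentials and datastore name are placeholders, and the object and method names should be verified against your vSphere version.

# Minimal pyVmomi sketch for enabling Storage I/O Control on a datastore.
# The vCenter address, credentials and datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # skip certificate checks; lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Locate the datastore by name.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
datastore = next(d for d in view.view if d.name == "Datastore01")
view.Destroy()

# Enable Storage I/O Control (IORM) on the datastore.
spec = vim.StorageResourceManager.IORMConfigSpec(enabled=True)
content.storageResourceManager.ConfigureDatastoreIORM_Task(datastore, spec)

Disconnect(si)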

Align Storage Resources to Meet Your Business Needs

Use Storage I/O Control to configure rules and policies to specify the business priority of each virtual machine. When I/O congestion is detected, Storage I/O Control dynamically allocates the available I/O resources to virtual machines according to your rules, improving service levels for critical applications and allowing you to virtualize more workloads, including I/O-intensive applications.

  • Set, view and monitor storage resource shares and limits (see the configuration sketch after this list).
  • Set and enforce storage priorities (per virtual machine) across a group of vSphere hosts.
  • Reduce your need for storage volumes dedicated to a single application, thereby increasing your infrastructure flexibility and agility.
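
A per-virtual-machine policy of this kind can also be applied programmatically. The sketch below reuses the pyVmomi connection from the earlier example; the virtual machine name "App01", the share value and the IOPS limit are placeholders, and the type names mirror the vSphere API as exposed by pyVmomi.

# Minimal pyVmomi sketch, reusing the "si" connection from the earlier example.
# The virtual machine name, share value and IOPS limit are placeholders.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "App01")
view.Destroy()

changes = []
for device in vm.config.hardware.device:
    if isinstance(device, vim.vm.device.VirtualDisk):
        # Custom disk shares plus an IOPS limit (-1 would mean unlimited).
        device.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
            shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom, shares=2000),
            limit=1000)
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=device))

vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))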

Technical Details

Per-Host vs. Multi-Host Storage Allocation

Traditional CPU and memory shares govern resources on a single VMware® ESXi™ host, so virtual machines compete only for the memory and CPU contained within that host. Shared storage resources in a vSphere infrastructure are different: vSphere must arbitrate access to a datastore at a multi-host level, rather than on a per-host basis.

Resolving Storage Imbalances

Storage I/O Control operates as a “datastore-wide disk scheduler.” Once enabled for a specific datastore, it monitors that datastore and sums the disk shares of each VMDK file stored on it. Storage I/O Control then calculates each ESXi host’s I/O slot entitlement from the percentage of the datastore’s total shares held by the virtual machines running on that host.

If the specified latency threshold has been reached for the datastore (by default an average I/O latency of 30 ms), Storage I/O Control resolves the imbalance by limiting the number of I/O operations each host can issue to that datastore.
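
As a purely illustrative example of the proportional calculation, with hypothetical hosts, virtual machines and share values:

# Hypothetical numbers only: two hosts run VMs against the same datastore.
host_shares = {
    "esxi-01": 1000 + 2000,   # VM-A (1,000 shares) + VM-B (2,000 shares)
    "esxi-02": 1000,          # VM-C (1,000 shares)
}
total = sum(host_shares.values())

# Under contention, each host is entitled to I/O slots in proportion to its shares.
for host, shares in host_shares.items():
    print(f"{host}: {shares / total:.0%} of the datastore's I/O capacity")
# esxi-01: 75% of the datastore's I/O capacity
# esxi-02: 25% of the datastore's I/O capacity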

Dynamic Latency Threshold Settings

The default latency threshold for Storage I/O Control is 30 milliseconds. Because not all storage devices are equal, this default is set at a mid-range value. Some devices, such as SSDs, reach their natural contention point at lower latencies; for these, the threshold should be decreased.

However, manually determining the correct latency threshold for each device can be difficult. Rather than relying on the default or a user-selected threshold, vSphere 5.5 Storage I/O Control can automatically determine the best threshold for a datastore.

In automatic mode, the latency threshold is set to the value determined by the I/O injector (a component of Storage I/O Control). The I/O injector calculates the datastore's peak throughput, finds the 90 percent throughput point, and uses the latency measured at that point as the threshold. vSphere administrators can change this percentage of peak throughput, or they can continue to enter a threshold value in milliseconds.
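
In API terms, the threshold mode can be switched between automatic and manual through the same IORM configuration object used to enable the feature. The following pyVmomi sketch reuses the si and datastore objects from the earlier example; the field names follow the vSphere API's IORMConfigSpec and should be verified against your vSphere version.

# Minimal pyVmomi sketch, reusing "si" and "datastore" from the earlier example.
# Field names follow the vSphere API's IORMConfigSpec; verify against your version.
from pyVmomi import vim

spec = vim.StorageResourceManager.IORMConfigSpec(
    enabled=True,
    congestionThresholdMode="automatic",   # let the I/O injector determine the threshold
    percentOfPeakThroughput=90)            # measure latency at 90 percent of peak throughput
# For a manual threshold instead, e.g. a lower value for an SSD-backed datastore:
# spec = vim.StorageResourceManager.IORMConfigSpec(
#     enabled=True, congestionThresholdMode="manual", congestionThreshold=15)

si.RetrieveContent().storageResourceManager.ConfigureDatastoreIORM_Task(datastore, spec)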

VmObservedLatency

VmObservedLatency is a Storage I/O Control metric introduced in vSphere 5.1 that replaces the datastore-latency metric used in previous versions. It measures the time from when the VMkernel receives an I/O from the virtual machine until it receives the response from the datastore. Previously, latency was measured only after the I/O had left the ESXi host; VmObservedLatency also captures latency incurred inside the host and is visible in the vSphere Client.
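
The metric is also exposed through the vSphere performance counters, so it can be retrieved programmatically. The sketch below reuses the si connection from the earlier examples and searches the counter catalog by name rather than assuming an exact counter key, since the key may vary by vSphere version.

# Minimal pyVmomi sketch, reusing the "si" connection from the earlier examples.
# The counter is looked up by name because the exact key can vary by vSphere version.
from pyVmomi import vim

perf = si.RetrieveContent().perfManager
matches = [c for c in perf.perfCounter
           if "vmobservedlatency" in c.nameInfo.key.lower()]
for c in matches:
    print(f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType} (counter id {c.key})")

# With a counter id and a host or VM managed object, recent samples can be pulled
# via PerformanceManager.QueryPerf, for example:
# query = vim.PerformanceManager.QuerySpec(
#     entity=host,
#     metricId=[vim.PerformanceManager.MetricId(counterId=matches[0].key, instance="*")],
#     intervalId=20, maxSample=10)
# results = perf.QueryPerf(querySpec=[query])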
