Storage I/O Control
Storage I/O Control (SIOC) is a feature first introduced in vSphere 4.1 that provides I/O prioritization for virtual machines running on a cluster of ESX/ESXi servers that access a shared storage pool. It extends the familiar constructs of shares and limits, which have long existed for CPU and memory, to storage, through a dynamic allocation of I/O queue slots across a cluster of servers. When a latency threshold is exceeded for a given block-based storage device, SIOC balances the available queue slots across the ESX servers to align the importance of each workload with its share of the available throughput: it can reduce the I/O queue slots given to virtual machines with a low number of shares in order to provide more queue slots to virtual machines with a higher number of shares.
SIOC provides a means of throttling back I/O activity for certain virtual machines so that other virtual machines receive a fairer distribution of I/O throughput and an improved service level. In Figure 1, the two business-critical virtual machines (the online store and MS Exchange) are adversely affected by a “noisy neighbor,” the data mining system. With SIOC enabled, the Exchange and online store virtual machines are provided more I/O slots than the less important data mining virtual machine.
For SIOC to engage in optimizing I/O to a given datastore, two conditions must be met:
- The datastore must have this feature enabled. This is done by changing a property setting of that datastore.
- A sustained average latency must be detected across the hosts (ESX servers) that share the datastore. The default threshold is 30 ms and can be modified through the advanced settings in the datastore properties.
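As a rough illustration of the two conditions above, the engagement check can be sketched in Python. This is a conceptual model, not VMware's implementation: only the 30 ms default threshold and the two conditions come from the text; the function name and sample latency values are hypothetical.

```python
# Conceptual sketch only -- not VMware's actual implementation.
# SIOC engages for a datastore when (1) the feature is enabled on it
# and (2) the average latency observed across the hosts sharing it
# exceeds the congestion threshold (default 30 ms).

DEFAULT_THRESHOLD_MS = 30.0  # default latency threshold from the text

def sioc_should_engage(enabled, host_latencies_ms,
                       threshold_ms=DEFAULT_THRESHOLD_MS):
    """Return True when SIOC would begin managing the datastore's queues."""
    if not enabled or not host_latencies_ms:
        return False
    avg_latency = sum(host_latencies_ms) / len(host_latencies_ms)
    return avg_latency > threshold_ms

# Example: three ESX hosts reporting device latency in milliseconds.
print(sioc_should_engage(True, [42.0, 35.5, 28.0]))  # sustained congestion
print(sioc_should_engage(True, [12.0, 9.5, 14.0]))   # healthy latencies
```

Note that in the real feature the latency must be *sustained*, i.e., averaged over an observation window rather than taken from a single sample as in this sketch.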
Once both of those conditions are met, SIOC engages in proactively managing the I/O queues across all ESX servers that share the datastore. It evaluates the percentage of I/O shares each virtual machine has relative to the total shares of all virtual machines accessing the datastore, and assigns a proportional number of I/O queue slots to ensure that high-priority virtual machines receive more throughput and lower latency than lower-priority virtual machines. SIOC will even throttle back the I/O slots on an ESX server where a low-priority virtual machine is the only workload running, in order to free up queue slots on another server that is running several virtual machines.
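The share-proportional distribution described above can be sketched as follows. Again this is an illustrative model rather than VMware's code: the total slot count, virtual machine names, and share values are assumptions chosen to mirror the Figure 1 scenario.

```python
# Illustrative model of share-proportional queue-slot allocation.
# Each VM's fraction of the total I/O shares on the datastore
# determines its fraction of the available queue slots.
# All names and numbers here are hypothetical examples.

def allocate_queue_slots(vm_shares, total_slots):
    """Map {vm_name: shares} to {vm_name: slots}, proportional to shares."""
    total_shares = sum(vm_shares.values())
    return {vm: total_slots * shares // total_shares
            for vm, shares in vm_shares.items()}

# A Figure 1-style scenario: two high-share business-critical VMs
# and one low-share "noisy neighbor".
shares = {"online_store": 2000, "ms_exchange": 2000, "data_mining": 500}
print(allocate_queue_slots(shares, 90))
# data_mining ends up with far fewer slots than the business VMs
```

With 4500 total shares, the data mining VM holds roughly an ninth of the shares and therefore receives roughly a ninth of the queue slots, which is the throttling effect the text describes.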
SIOC provides a dynamic allocation mechanism that adjusts to the changing conditions of a mixed workload. It leverages the I/O shares, which are set in the Virtual Machine Properties for each virtual disk, to distribute the available I/O slots so that quality of service is enforced not just at the host level but across the collection of hosts sharing the datastore. This gives the vSphere administrator a new means of achieving higher levels of consolidation with the confidence that shared resource pools will not let low-priority workloads limit the performance of higher-priority workloads. SIOC also benefits your virtual environment by providing I/O distribution fairness even when all virtual machines on a cluster of ESX servers sharing a datastore have equal or default I/O shares.
In vSphere 4.1, SIOC works only with block-based datastores that reside on a single extent and are managed by a single vCenter Server. vSphere 5.0 extends Storage I/O Control (SIOC) to provide cluster-wide I/O shares and limits for NFS datastores as well, which means that no single virtual machine should be able to create a bottleneck in any environment, regardless of the type of shared storage used. When the configured latency threshold is exceeded, SIOC automatically throttles any virtual machine that is consuming a disproportionate amount of I/O bandwidth so that the other virtual machines using the same datastore receive their fair share of I/O. Storage DRS and SIOC are the perfect partners for preventing violations of service-level agreements while providing both long-term and short-term I/O distribution fairness.