Storage vMotion
Avoid application downtime for planned storage maintenance by migrating live VM disk files within and across storage arrays
Perform live migration of virtual machine disk files within and across storage arrays with vSphere Storage vMotion. Relocate virtual machine disk files while maintaining continuous service availability and complete transaction integrity.
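For administrators who drive such migrations through the vSphere API rather than the client, the sketch below shows one way a Storage vMotion could be initiated with the open-source pyVmomi bindings, by submitting a RelocateSpec that changes only the datastore. The vCenter address, credentials, and the virtual machine and datastore names are placeholders, and error handling is omitted.

```python
# Illustrative sketch: start a Storage vMotion through the vSphere API using
# pyVmomi. The vCenter address, credentials, VM name and datastore name are
# placeholders; a RelocateSpec that sets only a new datastore (no host change)
# asks vCenter to migrate the disk files while the VM keeps running.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()          # lab use; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")
    target_ds = find_by_name(content, vim.Datastore, "tier1-datastore")

    spec = vim.vm.RelocateSpec()
    spec.datastore = target_ds                  # move the disk files to this datastore

    task = vm.RelocateVM_Task(spec=spec)        # returns a Task; the VM stays online
    print("Storage vMotion task submitted:", task.info.key)
finally:
    Disconnect(si)
```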
Simplify Array Migrations and Storage Upgrades
Eliminate service disruptions with live, automated migration of virtual machine disk files from existing storage to their new destination. Non-disruptive migration of virtual machine disk files to different classes of storage enables cost-effective management of virtual machine disks based on usage and priority policies as part of a strategy for tiered storage.
Dynamically Optimize Storage I/O Performance
Optimize storage I/O performance through non-disruptive movement of virtual machine disk files to alternative LUNs that are better architected to deliver the required performance. Cut costs by eliminating the need to over-allocate storage resources to deal with I/O bottlenecks.
Manage Storage Capacity More Efficiently
Reclaim unused or “stranded” storage capacity and allocate it to other virtual machines. Storage vMotion enables efficient storage utilization and helps avoid performance problems before they occur by non-disruptively moving virtual machines to larger-capacity LUNs as their disk files approach the size limits of the current LUN.
Storage vMotion is included in all VMware vSphere editions.
VMware vSphere 5.0 Storage vMotion Enhancements
Storage vMotion in vSphere 5.0 received multiple enhancements that increase the efficiency of the migration process, improve overall performance and enhance supportability. Storage vMotion in vSphere 5.0 also supports the migration of virtual machines with vSphere snapshots present, as well as the migration of linked clones.
Prior to vSphere 5.0, Storage vMotion used a mechanism called Changed Block Tracking (CBT). This method used iterative copy passes: it first copied all blocks in a VMDK to the destination datastore, then used the changed block tracking map to copy the blocks that were modified on the source during the previous pass. The CBT method used multiple passes to eventually “converge” on a small set of blocks that had to be copied to make source and destination identical. When the Storage vMotion process got down to a small enough set of blocks, it would Fast Suspend the virtual machine on the source, make the final copy, and then Resume the virtual machine on the destination datastore. This method worked well, except in specific corner cases where it was difficult to narrow the changed block tracking map down to a set of blocks small enough to complete the final copy pass within a few seconds.
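The convergence behavior described above can be illustrated with a small, self-contained simulation. This is not VMware code: datastores are modelled as dictionaries of block values, the changed block tracking map as a set of dirtied blocks, and the pass limit, threshold and write rate are assumed values chosen only to show how successive passes shrink (or fail to shrink) toward a final short copy.

```python
# Conceptual simulation of the pre-vSphere 5.0 CBT-based copy loop (not VMware
# code). A datastore is a dict of block -> contents; the changed block tracking
# map is the set of blocks dirtied during the previous copy pass.
import random

FINAL_COPY_THRESHOLD = 4      # assumed: small enough to copy while Fast Suspended
MAX_PASSES = 32               # assumed: give up converging after this many passes

def cbt_migrate(source, write_rate=0.25):
    """write_rate: fraction of the blocks copied in a pass that the guest
    dirties while that pass is running (a crude stand-in for workload I/O)."""
    dest = {}
    dirty = set(source)                       # pass 1 copies every block
    for _ in range(MAX_PASSES):
        copied = list(dirty)
        for block in copied:                  # copy this pass's blocks
            dest[block] = source[block]

        # While the pass ran, the guest dirtied a number of blocks roughly
        # proportional to how long the pass took, i.e. how much was copied.
        dirty = set(random.sample(list(source), int(len(copied) * write_rate)))
        for block in dirty:
            source[block] = random.random()

        if len(dirty) <= FINAL_COPY_THRESHOLD:
            for block in dirty:               # Fast Suspend: final, short copy
                dest[block] = source[block]
            return dest                       # VM Resumes on the destination
    raise RuntimeError("workload dirties blocks faster than the passes converge")

source_vmdk = {block: random.random() for block in range(1024)}
print(cbt_migrate(source_vmdk) == source_vmdk)    # True: the copies converged
```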
For vSphere 5.0, Storage vMotion was enhanced with a new method called Mirror Mode. At a high level, Mirror Mode performs a one-pass copy of the data from the source to the destination datastore, with blocks that change on the source during the copy mirrored to the destination. Storage vMotion uses the VMkernel Datamover to perform the data transfer between the source and destination datastores, while the Mirror Driver manages how virtual machine writes are applied during the copy.
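The following self-contained sketch illustrates the Mirror Mode idea under the same simplified block model as the previous example: a single copy pass walks the disk while a mirror driver sends guest writes either to the source only (block not yet copied) or to both source and destination (block already copied). The class and function names are illustrative, not VMware's.

```python
# Conceptual sketch of the vSphere 5.0 Mirror Mode idea (not VMware code),
# using the same block-dictionary model as the previous example. A single copy
# pass walks the disk while a "mirror driver" decides where guest writes land.
import random

class MirrorDriver:
    """Writes to blocks that have already been copied go to both source and
    destination; writes to not-yet-copied blocks go to the source only, since
    the copy pass will pick them up when it reaches them."""
    def __init__(self, source, dest):
        self.source, self.dest = source, dest
        self.copied = set()

    def mark_copied(self, block):
        self.copied.add(block)

    def write(self, block, data):
        self.source[block] = data
        if block in self.copied:
            self.dest[block] = data           # mirror the write

def mirror_mode_migrate(source, guest_write):
    dest = {}
    driver = MirrorDriver(source, dest)
    for block in sorted(source):              # one pass over the whole disk
        dest[block] = source[block]
        driver.mark_copied(block)
        guest_write(driver)                   # the guest keeps writing meanwhile
    # Source and destination are now identical; a Fast Suspend and Resume
    # would switch the running VM over to the destination files.
    return dest

source_vmdk = {block: random.random() for block in range(256)}
guest = lambda drv: drv.write(random.randrange(256), random.random())
print(mirror_mode_migrate(source_vmdk, guest) == source_vmdk)     # True
```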
The high-level Storage vMotion sequence for vSphere 5.0 is as follows:
1. The virtual machine working directory is copied by VPXA to the destination datastore.
2. A “shadow” virtual machine is started on the destination datastore using the copied files. The shadow virtual machine idles while it waits for the copy of the virtual machine disk files to complete.
3. Storage vMotion enables the Mirror Driver to mirror writes of already copied blocks to the destination.
4. In a single pass, the virtual machine disk files are copied to the destination datastore while I/O is mirrored.
5. Storage vMotion invokes a Fast Suspend and Resume of the virtual machine (similar to vMotion) to transfer the running virtual machine over to the idling shadow virtual machine.
6. After the Fast Suspend and Resume completes, the old home directory and virtual machine disk files are deleted from the source datastore.
The enhancements to Storage vMotion in vSphere 5.0 increase its efficiency and the predictability of migration times, making migrations easier to plan and reducing the elapsed time per migration.
In vSphere 5.1, up to four disk copies per Storage vMotion operation can be performed in parallel. In previous versions, vSphere copied the disks belonging to a virtual machine serially. For example, if a request to copy six disks in a single Storage vMotion operation is received, the first four copies are initiated simultaneously; as soon as one of those finishes, the next disk copy is started. To reduce the performance impact on other virtual machines, parallel disk copies apply only to Storage vMotion operations that move multiple virtual machine disk files from multiple distinct datastores to multiple distinct datastores. This means that if a virtual machine has disk files on datastores A, B, C and D, parallel disk copies will only happen if the destination datastores are E, F, G and H. The common use case for parallel disk copies is the migration of a virtual machine configured with an anti-affinity rule inside a Storage DRS datastore cluster.
Copies of multiple virtual machine disks from a single datastore to another single datastore, from a single datastore to multiple datastores, or from multiple datastores to a single datastore use the traditional serial copy method.
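As a rough illustration of the rule spelled out in the two paragraphs above, the following sketch expresses the eligibility check and the resulting copy ordering. The four-wide cap and the “distinct datastores on both sides” requirement come from the text; the function names, the representation of a disk move as a (source, destination) pair, and the fixed batching are illustrative simplifications.

```python
# Sketch of the vSphere 5.1 parallel disk copy rule described above. Each disk
# move is a (source_datastore, destination_datastore) pair; the four-wide cap
# and the distinct-datastore rule come from the text, while the fixed batching
# is a simplification of the up-to-four-in-flight behaviour.
MAX_PARALLEL_DISK_COPIES = 4

def uses_parallel_copies(disk_moves):
    """Parallel copies apply only when every disk has its own source datastore
    and its own destination datastore."""
    sources = [src for src, _ in disk_moves]
    dests = [dst for _, dst in disk_moves]
    return (len(disk_moves) > 1
            and len(set(sources)) == len(sources)
            and len(set(dests)) == len(dests))

def copy_batches(disk_moves):
    """Order the copies as they would be started: up to four at a time when
    parallel copies apply, strictly one at a time otherwise."""
    width = MAX_PARALLEL_DISK_COPIES if uses_parallel_copies(disk_moves) else 1
    return [disk_moves[i:i + width] for i in range(0, len(disk_moves), width)]

# Disks on datastores A-D moving to E-H: distinct on both sides, so four copies
# run in parallel.
print(copy_batches([("A", "E"), ("B", "F"), ("C", "G"), ("D", "H")]))

# Several disks moving from one datastore to another: traditional serial copy.
print(copy_batches([("A", "B"), ("A", "B"), ("A", "B")]))
```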
Figure 7. Storage vMotion
If you are moving disks between different datastores in a single Storage vMotion operation, this should speed things up significantly. The limit of eight concurrent Storage vMotion operations does not directly relate to the parallel disk copy change: even if only one Storage vMotion operation is issued (leaving room for another seven operations on the target datastores), that single operation might be moving multiple disks belonging to a virtual machine in parallel.