VMware

Build a Flexible, Efficient Datacenter

vSphere with Operations Management combines the world’s leading virtualization platform with VMware’s award-winning management capabilities. This solution gives IT operational insight into the virtual environment, providing improved availability, performance, and capacity utilization. Run business applications confidently and meet the most demanding service-level agreements at the lowest TCO.



Storage vMotion

Avoid application downtime for planned storage maintenance by migrating live VM disk files within and across storage arrays


At a Glance

Perform live migration of virtual machine disk files within and across storage arrays with vSphere Storage vMotion. Relocate virtual machine disk files while maintaining continuous service availability and complete transaction integrity.

  • Simplify array migrations and storage upgrades
  • Dynamically optimize storage I/O performance
  • Efficiently manage storage capacity

Simplify Array Migrations and Storage Upgrades
Eliminate service disruptions with live, automated migration of virtual machine disk files from existing storage to their new destination. Non-disruptive migration of virtual machine disk files to different classes of storage enables cost-effective management of virtual machine disks based on usage and priority policies as part of a strategy for tiered storage.

  • Perform zero downtime storage migrations with complete transaction integrity
  • Migrate the disk files of virtual machines running any supported operating system on any supported server hardware
  • Perform live migration of virtual machine disk files across any Fibre Channel, iSCSI, FCoE, and NFS storage system supported by vSphere

Dynamically Optimize Storage I/O Performance
Optimize storage I/O performance through non-disruptive movement of virtual machine disk files to alternative LUNs that are better architected to deliver the required performance. Cut costs by eliminating the over-allocation of storage resources that is often used to work around I/O bottlenecks.

  • Manage storage performance issues without scheduled downtime
  • Proactively deal with storage bottlenecks before they become major issues
  • Core technology behind Storage DRS for automating storage performance management

Manage Storage Capacity More Efficiently
Reclaim unused or “stranded” storage capacity and allocate it to other virtual machines. Storage vMotion avoids capacity-related performance problems before they occur by non-disruptively moving virtual machines to larger capacity storage LUNs as their disk files approach the size limits of the current LUN.

  • Move virtual machine disk files between different classes of storage as project needs change
  • Migrate virtual machines with highest performance needs to newly acquired storage
  • Shift lower priority machines to slower or older arrays, freeing high performance storage for more important workloads

Storage vMotion is included in all VMware vSphere editions.


Technical Details

VMware vSphere 5.0 Storage vMotion Enhancements

Storage vMotion in vSphere 5.0 received multiple enhancements that increase the efficiency of the migration process, improve overall performance, and enhance supportability. Storage vMotion in vSphere 5.0 supports the migration of virtual machines with vSphere Snapshots present, as well as the migration of linked clones.

Prior to vSphere 5.0, Storage vMotion used a mechanism called Changed Block Tracking (CBT). This method used iterative copy passes: it first copied all blocks of a VMDK to the destination datastore, then used the changed-block-tracking map to copy the blocks that were modified on the source during the previous pass. Repeated passes would eventually “converge” on a small set of blocks that needed one final copy to make source and destination identical. When the remaining set was small enough, Storage vMotion would Fast Suspend the virtual machine on the source, make the final copy, and then Resume the virtual machine on the destination datastore. This method worked well, except in corner cases where the guest’s write rate made it difficult to narrow the changed-block map down to a set of blocks small enough to copy in the final pass within a few seconds.
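The iterative CBT process can be sketched as follows. This is an illustrative simulation, not VMware code: the block counts, the convergence threshold, and the assumption that shorter passes accumulate fewer dirty blocks are all hypothetical.

```python
# Illustrative sketch (not VMware's implementation) of the pre-5.0
# iterative Changed Block Tracking copy: repeat copy passes over the
# dirty-block set until it is small enough for a final suspended copy.
import random

BLOCKS = 1000          # total blocks in the hypothetical VMDK
FINAL_COPY_LIMIT = 16  # "small enough" threshold for the final pass

def workload_writes(n):
    """Blocks the running guest dirties while a copy pass is in flight."""
    return {random.randrange(BLOCKS) for _ in range(n)}

def cbt_migrate():
    dirty = set(range(BLOCKS))          # pass 1 copies every block
    passes = 0
    while len(dirty) > FINAL_COPY_LIMIT:
        passes += 1
        copied = dirty                  # copy the current dirty set...
        # ...while the guest keeps writing; shorter passes touch fewer
        # blocks, so the dirty set shrinks and the loop converges
        dirty = workload_writes(len(copied) // 10)
    # Fast Suspend: guest paused, final copy of the remaining dirty set
    return passes, len(dirty)

passes, final = cbt_migrate()
print(f"converged after {passes} passes; final pass copies {final} blocks")
```

If the guest wrote faster than the copy could keep up (the corner case mentioned above), the `while` loop would never converge; Mirror Mode removes that dependency on write rate.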

For vSphere 5.0, Storage vMotion was enhanced with a new method called Mirror Mode. At a high level, Mirror Mode uses a single copy pass of data from the source to the destination datastore, with changed blocks on the source datastore mirrored to the destination as they are written. Storage vMotion uses the VMkernel Datamover to perform the data transfer between source and destination datastores, with the Mirror Driver managing how virtual machine writes are applied during the copy.

The high-level Storage vMotion sequence for vSphere 5.0 is as follows:

  1. Fast “Stun” the virtual machine to install the mirror driver, and allow the data mover to copy the first region of the source VMDK to the destination datastore.
    • VPXA process copies the working directory of the virtual machine to the destination datastore
    • A “shadow” virtual machine is started on the destination datastore using the copied files. The “shadow” virtual machine idles, waiting for the copying of the virtual machine disk file(s) to complete.
  2. Resume the source virtual machine to allow normal processing
  3. The data mover continues to copy source VMDK to destination.
  4. For writes occurring during the copy process, the mirror driver does one of the following:
    • For writes to regions already copied by the data mover, mirror the writes to the destination
    • For writes to the region currently being copied by the data mover, queue the writes until the data mover has completed the copy of that region, then mirror the writes to source and destination
    • For writes occurring to regions that have not yet been copied by the data mover, write to source VMDK only, as the data mover will copy later when it reaches that region
  5. Storage vMotion invokes a Fast Suspend and Resume of the virtual machine (similar to vMotion) to transfer operation of the running virtual machine over to the idling shadow virtual machine.
  6. After the Fast Suspend and Resume completes, the old home directory and VM disk files are deleted from the source datastore.
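The mirror driver’s three-way write routing in step 4 can be sketched as a small state machine. This is an illustrative model, not VMware code: the class, block granularity, and return strings are hypothetical, and the real driver operates on I/O regions inside the VMkernel.

```python
# Illustrative sketch (not VMware's implementation) of Mirror Mode write
# handling: a single copy cursor walks the disk, and each guest write is
# routed according to where it lands relative to that cursor.

class MirrorDriver:
    def __init__(self, nblocks):
        self.nblocks = nblocks
        self.copied_upto = 0      # blocks [0, copied_upto) already on destination
        self.in_flight = None     # (start, end) region the data mover is copying
        self.queued = []          # writes deferred until the region completes

    def guest_write(self, block):
        if block < self.copied_upto:
            # region already copied: mirror the write to both sides
            return "mirrored to source and destination"
        if self.in_flight and self.in_flight[0] <= block < self.in_flight[1]:
            # region being copied right now: defer until the copy finishes
            self.queued.append(block)
            return "queued until region copy completes"
        # region not yet copied: the data mover will pick it up later
        return "written to source only (copied later)"

    def region_copy_done(self, start, end):
        """Data mover finished [start, end); flush deferred writes."""
        self.copied_upto = end
        self.in_flight = None
        flushed, self.queued = self.queued, []
        return flushed            # these are now mirrored to both sides

drv = MirrorDriver(1000)
drv.copied_upto = 100            # pretend the first region is already done
drv.in_flight = (100, 200)       # data mover is working on this region
print(drv.guest_write(50))       # behind the cursor
print(drv.guest_write(150))      # inside the in-flight region
print(drv.guest_write(900))      # ahead of the cursor
```

Because every block is either already mirrored, deferred, or still awaiting its one copy pass, source and destination are guaranteed to be identical after a single pass, which is why Mirror Mode needs no convergence loop.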

The enhancements to Storage vMotion in vSphere 5.0 increase its efficiency and make migration times more predictable, which makes migrations easier to plan and reduces the elapsed time per migration.

In vSphere 5.1, up to four parallel disk copies per Storage vMotion operation can be performed. In previous versions, vSphere serially copied the disks belonging to a virtual machine. For example, if a single Storage vMotion operation is asked to copy six disks, the first four copies are initiated simultaneously; as soon as one of the first four finishes, the next disk copy is started. In order to reduce the performance impact on other virtual machines, parallel disk copies apply only to Storage vMotion of multiple virtual machine disk files from multiple distinct datastores to multiple distinct datastores. This means that if a virtual machine has disk files on datastores A, B, C and D, parallel disk copies will only happen if the destination datastores are E, F, G and H. The common use case for parallel disk copies is the migration of a virtual machine configured with an anti-affinity rule inside a Storage DRS datastore cluster.

Copies of multiple virtual machine disks from a single datastore to another single datastore, from a single datastore to multiple datastores, or from multiple datastores to a single datastore use the traditional serial copy method.
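The eligibility rule described above can be expressed as a simple check. This is an illustrative sketch under the stated rule, not the actual vSphere placement logic: the function name and the per-disk (source, destination) tuples are hypothetical.

```python
# Illustrative sketch (not VMware's implementation) of the vSphere 5.1
# rule: parallel disk copies (up to four at a time) apply only when every
# disk moves between distinct source and distinct destination datastores.

def parallel_copy_eligible(moves):
    """moves: list of (source_datastore, destination_datastore), one per disk."""
    sources = [src for src, _ in moves]
    dests = [dst for _, dst in moves]
    # all sources distinct AND all destinations distinct
    return len(set(sources)) == len(sources) and len(set(dests)) == len(dests)

# Disks on A, B, C, D moving to E, F, G, H: eligible for parallel copies
print(parallel_copy_eligible([("A", "E"), ("B", "F"), ("C", "G"), ("D", "H")]))  # True
# Two disks sharing one source and one destination datastore: serial copy
print(parallel_copy_eligible([("A", "E"), ("A", "E")]))  # False
```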

Figure 7. Storage vMotion

If you are moving disks between different datastores in a single Storage vMotion operation, this change should speed migrations up significantly. The limit of eight concurrent Storage vMotion operations does not directly relate to the parallel disk copy change: even if only one Storage vMotion operation is issued (leaving room for another seven operations on the target datastores), that single operation might be moving multiple disks belonging to a virtual machine in parallel.