
VMware Data Recovery 1.2 Release Notes


Data Recovery | 10 JUN 2010 | Build 260251

Last Document Update: 12 JUL 2010

Check frequently for additions and updates to these release notes.

This document contains the following sections:

  • Benefits and Features
  • Supported Environments
  • Upgrading to Data Recovery 1.2
  • Enhancements
  • Resolved Issues
  • Known Issues
  • Recovering from Busy Linux FLR Unmounts

Benefits and Features

Read about the benefits and features of this product at VMware Data Recovery Overview - VMware. For additional information about known issues and resolved issues, see the Resolved Issues and Known Issues sections in this document.

Supported Environments

VMware Data Recovery 1.2 can be used with:

  • VMware vSphere 4.1
  • VMware vSphere 4.0 Update 2

Upgrading to Data Recovery 1.2

Previous Data Recovery installations are likely to have existing restore points that should be preserved. To ensure these restore points are preserved, use the processes described in this section.

Begin the upgrade process by installing the latest Data Recovery plug-in for the vSphere Client.

To install the latest Data Recovery plug-in

  1. Close the vSphere Client.
  2. Use the Add or Remove Programs item in the Control Panel to uninstall any previous versions of the VMware Data Recovery plug-in.
  3. Start the latest Data Recovery plug-in Windows Installer File (.msi) to install the Data Recovery plug-in.

Next, you must deploy the new Data Recovery appliance without deleting existing restore points. If the destination volume for the deduplication store is a virtual disk, do not delete the appliance. Deleting the appliance deletes the disks connected to the appliance, which would cause the backup data stored in the deduplication store to be deleted. To avoid this issue, complete the following procedure:

To upgrade Data Recovery appliances with virtual disks or RDMs

  1. IMPORTANT: Before upgrading to VMware Data Recovery 1.2, make sure all operations in your current 1.1 environment have completed before shutting down and performing the upgrade. If an integrity check or reclaim operation is running, allow the operation to complete. Do not CANCEL these operations.
  2. When no operations are running, unmount the destination disk and shut down the Data Recovery appliance.
  3. If you want to save the original Data Recovery appliance, rename it in some way. For example, you might rename an appliance called VMware Data Recovery to VMware Data Recovery - OLD.
  4. Deploy the new appliance.
  5. Use the datastore browser to move the disk containing the deduplication store to the same location as the new appliance.
  6. Edit the settings of the new appliance:
    1. Choose Add > Hard Disk.
    2. Choose Use an Existing Virtual Disk.
    3. Browse to the datastore and select the virtual disk that was connected to the older appliance as the destination.
    4. Choose the SCSI address.
    5. Choose Finish.
  7. Power on the new appliance.
  8. Edit the settings of the older appliance:
    1. Choose the hard disk that is used to store the deduplication store.
    2. Select Remove and leave the default option, Remove from virtual machine. DO NOT select Remove from virtual machine and delete files from disk.
    3. Click OK.
  9. Configure networking on the new appliance.
  10. Use the Data Recovery vSphere plug-in to connect to the backup appliance.
  11. Complete the getting started wizard. Note that you should mount the desired disk, but do not format it. Formatting the disk will erase all deduplication store data. The disk to be used may not display the expected name, but the proper name will be displayed after the wizard is completed.
  12. You are prompted to restore the configuration from the deduplication store. Select Yes if you want to restore the jobs and backup events and history.
  13. The client disconnects from the appliance after the configuration is restored and then reestablishes the connection. This may take several minutes.
  14. Once the client reconnects, check to see if a Reclaim or Integrity check operation has started. If so, STOP the operation.
  15. Immediately click Configure > Destinations and perform an integrity check on all mounted destinations.
  16. Verify the backup job configuration.
  17. Remove the older VMware Data Recovery appliance from the inventory.

Note that damaged restore points may cause the upgrade to fail. If you receive the message "Could not restore the Data Recovery appliance configuration", re-add the destination to the original Data Recovery appliance and then run an integrity check to clean up any damaged restore points. After the integrity check completes successfully, repeat the upgrade process.

If the destination volume for the deduplication store is a CIFS share, complete the following procedure:

To upgrade Data Recovery appliances with CIFS shares

  1. IMPORTANT: Before upgrading to VMware Data Recovery 1.2, make sure all operations in your current 1.1 environment have completed before shutting down and performing the upgrade. If an integrity check or reclaim operation is running, allow the operation to complete. Do not CANCEL these operations.
  2. When no operations are running, unmount the destination disk and shut down the Data Recovery appliance.
  3. If you want to save the original Data Recovery appliance, rename it in some way. For example, you might rename an appliance called VMware Data Recovery to VMware Data Recovery - OLD.
  4. Delete the older Data Recovery appliance.
  5. Deploy the new appliance.
  6. Power on the new appliance.
  7. Configure networking on the new appliance.
  8. Use the Data Recovery vSphere plug-in to connect to the backup appliance.
  9. Complete the getting started wizard. On the Backup Destination page, click the Add Network Share link and enter the appropriate information for your particular CIFS share.
  10. You are prompted to restore the configuration from the deduplication store. Select Yes if you want to restore the jobs and backup events and history.
  11. The client disconnects from the appliance after the configuration is restored and then reestablishes the connection. This may take several minutes.
  12. Once the client reconnects, check to see if a Reclaim or Integrity check operation has started. If so, STOP the operation.
  13. Immediately click Configure > Destinations and perform an integrity check on all mounted destinations.
  14. Verify the backup job configuration.
  15. Remove the older VMware Data Recovery appliance from the inventory.

Note that damaged restore points may cause the upgrade to fail. If you receive the message "Could not restore the Data Recovery appliance configuration", re-add the destination to the original Data Recovery appliance and then run an integrity check to clean up any damaged restore points. After the integrity check completes successfully, repeat the upgrade process.

Enhancements

The following enhancements have been made for this release of Data Recovery.

  • File Level Restore (FLR) is now available for use with Linux.
  • Each vCenter Server instance supports up to ten Data Recovery backup appliances.
  • The vSphere Client plug-in supports fast switching among Data Recovery backup appliances.
  • Miscellaneous vSphere Client Plug-In user interface enhancements including:
    • The means to name backup jobs during their creation.
    • Additional information about the current status of destination disks including the disk's health and the degree of space savings provided by the deduplication store's optimizations.
    • Information about the datastore from which virtual disks are backed up.

Resolved Issues

The following issues have been resolved since the last release of Data Recovery. The list of resolved issues below pertains to this release of Data Recovery only.

  • High DRS Or vMotion Activity on Protected Virtual Machines Causes Unnecessarily High CPU Utilization

    When a backup appliance protected virtual machines that were being significantly affected by vMotion or DRS, an unnecessarily large number of Data Recovery objects was created in memory, causing high CPU utilization. This occurred because Data Recovery interpreted the results of vMotion or DRS operations as an increase in the number of objects rather than as a movement of existing objects. This issue has now been resolved.

  • If vCenter Server Becomes Unavailable Data Recovery Permanently Loses Connectivity

    If a vCenter Server was rebooted or lost network connectivity while Data Recovery was conducting backups, Data Recovery failed to re-establish connectivity with the vCenter Server until after currently running backup jobs completed. This caused all new backup operations to fail and the vSphere Client plug-in could not connect to the engine during this time. Data Recovery now attempts to reconnect to the vCenter Server at regular intervals. This occurs while backups are in progress, thereby minimizing potential failures.

  • Data Recovery Backups May Fail To Make Progress At The Start Of The Task

    During incremental backup of a virtual machine, Data Recovery would fail to make progress and the backup appliance would show 100% CPU utilization. This condition persisted until the appliance was restarted. Even after restarting the backup appliance, no further progress was made on the incremental backup. This was the result of how the backup appliance used information about the last backup to create a new backup.

  • The Backup Appliance Crashes When Backing Up Certain Disk Configurations

    Virtual machines can have disks whose sizes are not multiples of a single megabyte. For example, it is possible to create a disk that is 100.5 MB. Normally, disks created using the vSphere Client are always a multiple of one MB. Some virtual machines that included disks whose size was not a multiple of one MB caused the backup appliance to crash. These disk sizes are now handled properly.

  • Data Recovery Does Not Properly Track Individual Disks For Backup

    Data Recovery supports backing up a subset of the disks in a virtual machine. Due to the way individual disks were tracked for backup, problems occurred if a snapshot changed the name of a disk. In such a case, Data Recovery did not match the disk with the one selected in the backup job; the disk was shown as selected, but it was not backed up. This issue no longer occurs.

  • Data Recovery Fails To Check Disk Hot Add Compatibility And To Clean Up After Failures

    Data Recovery attempts to hot add the disks of the virtual machine being backed up to the backup appliance. The block size of the datastore hosting the client disks could be larger than the block size of the datastore hosting the appliance. If this was the case, and if the size of the virtual machine's disk was larger than the size supported by the appliance, the hot add failed. This failure could occur after some disks had already been hot added successfully. In such a case, Data Recovery did not hot remove the virtual machine's successfully added disks. Data Recovery now checks datastore block sizes and disk sizes to ensure hot add operations will complete successfully before attempting the hot add.

  • Data Recovery Does Not Support Longer CIFS Passwords

    Data Recovery 1.1 did not support CIFS passwords longer than 16 characters. With this release, Data Recovery supports CIFS passwords of up to 64 characters.

  • The Backup Appliance Crashes If Disks Become Full

    In some cases, Data Recovery crashed if destination disks became full during backup. This no longer occurs.

  • Last Execution Shows Incorrect Time Stamp For Completed Jobs

    The Backup tab displays information about backups, including the last time each backup job was completed. The last completion date and time displayed for jobs was incorrect. This has been fixed.

  • Data Recovery vSphere Client Plug-In Failed to Connect to vCenter Server

    Attempts by the vSphere Client plug-in to connect to the backup appliance failed if the vCenter Server's inventory contained a large number of virtual machines. This typically occurred when more than 1000 virtual machines were present in the inventory. This issue has been fixed.

  • Adding Virtual Disks Causes Problems With Snapshots And Subsequent Backups

    If a virtual machine had an existing snapshot and a new virtual disk was added to the virtual machine, the next backup succeeded, but the snapshot was left behind. This snapshot caused subsequent backups to fail. This issue has been resolved.

  • Backups Failed to Complete As Expected

    In certain situations, backups would fail to complete as expected, and no subsequent backups would occur. This issue has been resolved.

  • Restore Wizard Does Not Enforce Valid Datastore Selections

    Using the restore wizard, it was possible to select datastore locations for restored virtual machines that were not valid. For example, when:

    • A datastore was renamed or deleted, the old information persisted, so an outdated name or non-existent store could be selected.
    • A virtual machine was to be restored to a cluster, datastores that were not on shared storage could be selected.

    Wizards have been modified so only valid selections are offered.

  • File Level Restore (FLR) In Admin Mode Mishandles Some Password Characters

    FLR in admin mode failed if the password contained the '/' character. This occurred because FLR was using '/' as a delimiter. As a result, passwords that included '/' resulted in incomplete passwords being sent to vCenter Server for authentication. FLR now handles '/' in passwords correctly.

  • FLR Fails To Mount Virtual Disks Due To Connection Delimiter

    In some cases, mounting a virtual disk through the FLR client failed due to a connection delimiter. This has been fixed.

  • Windows FLR Fails To Mount Multiple Disks With The Same Name

    It is possible to back up a virtual machine with multiple disks that have the same name. In such a case, each disk is associated with a vmdk located on a different datastore. When attempting to mount such disks through the Windows FLR client, only one of the virtual disks was mounted. This has been fixed, and all disks are now mounted.

Known Issues

The following known issues have been discovered through rigorous testing. The list of issues below pertains to this release of Data Recovery only.

  • Backup Appliance OVF File Publisher Shows No Certificate

    The OVF file containing the backup appliance does not have any information about the appliance's publisher or that publisher's certificate. The missing publisher information can be seen on the OVF Template Details page, which appears when deploying the backup appliance.

  • Backup Appliance Thin-Provisioned Disk Size is Unspecified

    Recent versions of the OVF deployment wizard display additional information that was not displayed in previous releases. In the Disk Format page of the Deploy OVF Template wizard, there is an option to store the appliance in a thin-provisioned format. The thin-provisioned disk size of the backup appliance OVF is not set for the backup appliance template, but this format can still be used without affecting appliance functionality.

  • ESX Servers with VMware HA Enabled May List Datastores Twice

    When restoring virtual machines to ESX servers that have VMware HA enabled, the restore location in the restore wizard may show the same datastore name twice. ESX servers can be configured to use shared storage and local storage. It is possible for multiple servers to use the same shared storage and it is possible for the local storage names to be similar among servers. For example, multiple servers could point to "SharedStorage1" and multiple servers could each have local storage called "DataStore1". This is especially problematic if users select locations for disk files and configuration files that are on separate servers, producing an invalid configuration. The potential for misconfiguration is not initially evident based on storage destination names, but an error identifies such an issue. If such an error occurs, select different combinations of storage destinations until an appropriate pair is found.

  • Imported Backup Jobs Back Up All Disks

    It is possible to create backup jobs that back up a subset of all disks in a source virtual machine. If such a backup job is imported into Data Recovery 1.2, the backup job behavior changes so all disks are backed up. To resolve this issue, modify backup jobs, restoring their original settings.

  • Restore Tab May Not Display Latest Updates To Restore Points

    After running backup jobs, resulting restore points sometimes are not updated in the Restore tab. To resolve this issue, disconnect and reconnect the Data Recovery vSphere Client plug-in by clicking Disconnect and then clicking Connect. The restore points are then visible in the Restore tab.

  • Backup Jobs Silently Fail To Start If vCenter Server Is Unavailable

    If vCenter Server is not available when the backup appliance starts, backup jobs may not start as expected and no warnings may be presented. Backups started manually by clicking Backup Now complete as expected. To resolve this issue, restart the backup appliance.

  • Deduplication Store Comments Not Initially Displayed

    Information in the comments field for the deduplication store is not always displayed. To check for comment information, click on the field, which causes the field contents to be updated.

  • Data Recovery Limitations when Restoring Virtual Machines with RDMs in Physical Compatibility Mode

    An RDM in Physical Compatibility mode cannot be protected by Data Recovery but other components of the virtual machine can be backed up and restored. These include the configuration, VMDKs and RDMs in Virtual Compatibility mode. A restore operation that overwrites these supported components on the source virtual machine will be successfully completed. However, a restore operation that creates a new virtual machine or replaces a deleted virtual machine will fail if the virtual machine that was backed up contained an RDM in Physical Compatibility mode. This is true even if the RDM in Physical Compatibility mode was not included in the backup job.

  • Documentation Incorrectly States Support for SUSE

    The VMware Data Recovery Administration Guide states that FLR supports SUSE Linux Enterprise Server 11.2. SUSE is not supported with this release.

  • FLR Cannot Mount LVM Volumes That Are Not In A Volume Group

    FLR cannot mount Logical Volume Manager (LVM) volumes in restore points if those LVM volumes are not part of a Volume Group. For FLR to be able to mount LVM volumes on a restore point, the LVM Volumes must have been added to an existing Volume Group or used to create a new Volume Group before the backup was created.
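
    For illustration, the following is a minimal sketch of placing an LVM physical volume into a Volume Group before a backup is created. The device and group names (/dev/sdb1, datavg, VolGroup00) are hypothetical examples:

        # Initialize the new disk partition as an LVM physical volume (hypothetical device)
        pvcreate /dev/sdb1

        # Either create a new Volume Group containing the physical volume...
        vgcreate datavg /dev/sdb1

        # ...or, alternatively, add the physical volume to an existing Volume Group
        vgextend VolGroup00 /dev/sdb1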

  • FLR LVM Mount Failures Can Result in the Red Hat LVM GUI Incorrectly Displaying System Disks as Uninitialized

    If FLR fails to mount an LVM disk, Red Hat's LVM GUI manager can incorrectly display system disks as uninitialized. Do not attempt to resolve this issue by initializing these volumes. Initializing these volumes removes all data. The volumes are fully functional and can be used as expected in the virtual machine. To resolve the issue in Red Hat's tool, reboot the virtual machine or issue the vgmknodes <volumegroupname> command.
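
    For reference, the following is a brief sketch of the vgmknodes alternative. The Volume Group name VolGroup00 is a hypothetical example; replace it with the affected group reported by vgdisplay:

        # List Volume Groups to find the name of the affected group
        lvm vgdisplay

        # Recreate the /dev special files for that Volume Group
        vgmknodes VolGroup00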

  • SELinux Prevents vdrFileRestore From Running

    SELinux prevents vdrFileRestore from loading the required library libvmacore.so.1.0. To resolve this issue, use one of the following three solutions (the last two are illustrated after this list):

    • Execute the command chcon -t textrel_shlib_t /usr/lib/vmware/vmacore/libvmacore.so.1.0
    • Change the SELinux policy to permissive
    • Disable SELinux
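
    For example, here is a minimal sketch of the second and third options. setenforce and /etc/selinux/config are standard SELinux mechanisms on Red Hat style systems and are not specific to Data Recovery:

        # Switch SELinux to permissive mode for the current boot only
        setenforce 0

        # To make the change persistent, or to disable SELinux entirely,
        # edit /etc/selinux/config, set SELINUX=permissive or SELINUX=disabled,
        # and then reboot the virtual machine.
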
  • Linux FLR May Require Additional Loop Devices

    Linux FLR uses a loop device for each vmdk file in a restore point and for each LVM physical volume detected. Most systems are configured to have 8 loop devices. FLR is unable to access additional restore points if there are no free loopback devices to accommodate additional vmdk files and LVM physical volumes. Increase the number of available loopback devices to reduce the potential for this limitation. The specific process for changing the number of available loopback devices varies among operating systems. For example, on some Red Hat systems, add the following line to the modprobe.conf file:

    options loop max_loop=64

    After rebooting the system, 64 loopback devices are available.
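
    To confirm how many loop devices exist and which are already in use, standard Linux tools can be run on the FLR virtual machine. A brief sketch follows; output formats vary by distribution:

        # List the loop device nodes present on the system
        ls /dev/loop*

        # Show which loop devices are currently attached to files
        losetup -a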

  • File Level Restore Indicates Some Registry Data May Have Been Lost

    Data Recovery can create restore points for misconfigured virtual machines. File Level Restore can mount such restore points, but doing so may result in the display of registry error messages. The error may appear as follows: "Registry hive (file): C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\Administrator\vixmntapi3 was damaged and it has been recovered. Some data might have been lost." These errors can be safely ignored.

  • Stale Deduplication Store Locks Prevent FLR Access

    If FLR fails to mount a restore point and the output in verbose mode (-v or --verbose) shows error 1315 in the log output, the failure may be due to a stale lock on the deduplication store. The lack of information about what is preventing the mount may be confusing. Error 1315 can be attributed to other causes, but it typically indicates a deduplication store lock.

    Under normal conditions, wait for Data Recovery processes to complete, after which the deduplication store lock is automatically removed. If the lock is stale and is not removed normally, this issue can be resolved by manually removing the lock. To remove the lock, remove the following lock file from the deduplication store:

    /<dedupe root>/VMwareDataRecovery/BackupStore/store.lck
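
    For example, assuming the deduplication store root is mounted at /mnt/dedupe (a hypothetical path), and after confirming that no Data Recovery operations are running, the stale lock could be removed as follows:

        # Confirm the lock file exists, then remove it (hypothetical mount path)
        ls -l /mnt/dedupe/VMwareDataRecovery/BackupStore/store.lck
        rm /mnt/dedupe/VMwareDataRecovery/BackupStore/store.lck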

  • Source Disks May Appear To Become Deselected For Backup

    Sources that were previously selected for backup may appear to become unselected. This happens when disks are cosmetically and temporarily disassociated from the virtual machine. This does not affect the backup. All previously selected sources continue to be backed up properly. After a backup, sources again appear to be selected. To resolve this cosmetic issue, expand the virtual machine's node in the inventory tree or in the source selection tree in the backup wizard. The virtual machine's disks re-appear after about a minute and the sources are once again shown as selected.

  • Backing Up A Single Virtual Machine Using Multiple Backup Appliances Creates Errors

    Data Recovery supports managing multiple backup appliances with a single vCenter Server instance. In such a case, only one backup appliance should be configured to back up any virtual machine. Administrators must ensure no virtual machines are backed up by multiple backup appliances. If multiple backup appliances are configured to back up a single virtual machine, unwanted behavior such as snapshot creation and deletion errors may occur.

Recovering from Busy Linux FLR Unmounts

It is possible that Linux FLR may exit unexpectedly or may not complete all of the tasks included in a shutdown. If this occurs, evidence of FLR may be left on the Linux virtual machine on which it was running. Such a situation is not known to cause any issues with the system, but it may leave some loop devices, folders, files, and processes on the system that would have been removed by a planned Linux FLR shutdown. While these residual resources typically have little to no impact on the functioning of the affected Linux virtual machine, you may choose to remove them.

The most common reason for Linux FLR failing to close as expected is that one of the mounts is in use. For example, a failure to unmount a mount as expected might produce the following output:

    Restore point has been mounted...
    "/EG VC Server/EG Datacenter/host/example-esx.eng.example.com/Resources/Red Hat 5.4 Two LVM Groups"
    root mount point -> "/root/2010-02-19-22.12.10"
    Please input "unmount" to terminate application and remove mount point
    unmount
                                        USER        PID ACCESS COMMAND
    /root/2010-02-19-22.12.10/Mount1:   root       4137 ..c..  bash
    Busy mounts detected, unmount aborted. Please unbusy mounts per list above.
    Note, future unmounts can be forced by using "force" as input but will cause unclean unmounts which may require you to manually clean up.
    Restore point has been mounted...
    "/EG VC Server/EG Datacenter/host/example-esx.eng.example.com/Resources/Red Hat 5.4 Two LVM Groups"
    root mount point -> "/root/2010-02-19-22.12.10"
    Please input "unmount" to terminate application and remove mount point

In such a case, it is possible to cleanly close Linux FLR by first ending the use of the mount and then executing an unmount command. Alternatively, the unmounting can be completed using the force command. Using the force command produces output such as the following:

    Please input "unmount" to terminate application and remove mount point
    force
    Removed "/root/2010-02-19-22.12.10/Mount2"
    Removed "/root/2010-02-19-22.12.10/Mount3"
    Removed "/root/2010-02-19-22.12.10/Mount1"
    Removed "/root/2010-02-19-22.12.10"
    umount: /tmp/vmware-root/8027465182774010399_1: device is busy
    umount: /tmp/vmware-root/8027465182774010399_1: device is busy
    umount: /tmp/vmware-root/8027465182774010399_1: device is busy
    umount: /tmp/vmware-root/8027465182774010399_1: device is busy
    terminate called after throwing an instance of 'std::exception'
        what():  St9exception
    Aborted
    

Some of the steps involved in closing Linux FLR completed as expected, but some did not. FLR successfully removed the root mount point, which was in /root/, as well as that mount point's children. Linux FLR was unable to remove /tmp/vmware-root/8027465182774010399_1, as shown in the output. This mount point continues to service mounts.

For example, files can still be listed and copied from this path as illustrated in the following example:

    [root@office ~]# ll /tmp/vmware-root/8027465182774010399_1
    -rw-r--r-- 1 root root   68663 Aug 18  2009 config-2.6.18-164.el5
    drwxr-xr-x 2 root root    1024 Dec 17 06:00 grub
    -rw------- 1 root root 3307695 Dec 17 06:10 initrd-2.6.18-164.el5.img
    drwx------ 2 root root   12288 Dec 17 05:43 lost+found
    -rw-r--r-- 1 root root  107405 Aug 18  2009 symvers-2.6.18-164.el5.gz
    -rw-r--r-- 1 root root  954947 Aug 18  2009 System.map-2.6.18-164.el5
    -rw-r--r-- 1 root root 1855956 Aug 18  2009 vmlinuz-2.6.18-164.el5
    

Because this mount point is still functioning, the following resources must be available and active:

  • Fuse mount of the partitions
  • Loop device servicing the fuse vmdk flat file local mount
  • Forked FLR engine processing I/O to and from the mount
  • Redo logs

To address these remaining resources, several steps must be completed, including:

  • Unmounting the Fuse mounted partition
  • Removing the stale mount
  • Killing any running vdrFileRestore processes
  • Removing any remaining defunct redo logs

To clean up after a busy Linux FLR unmount

  1. Find remaining mount points. For example, you might do this using the ll command and receiving output similar to:
        [root@office ~]# ll /root/2010-02-19-22.12.10/
        drwxr-xr-x 3 root root 1024 Feb 2 13:17 Mount2
        drwxr-xr-x 25 root root 4096 Feb 19 22:06 Mount3
  2. Remove the found mount points. When removing mount points, you must not be in any mount directories with a terminal window, or the commands will fail. The commands would be of a form similar to:
        [root@office ~]# umount /root/2010-02-19-22.12.10/Mount2
        [root@office ~]# umount /root/2010-02-19-22.12.10/Mount3
  3. Remove the FLR root mount and its children. This command would be of a form similar to:
        [root@office ~]# rm -rf /root/2010-02-19-22.12.10/
  4. Deactivate any Linux FLR LVM groups.

    Note that if no FLR Volume Groups are displayed, you can skip all LVM-related steps. FLR LVM groups are identified by their unique naming: they are the ones with "flr" in their names. Only deactivate FLR Volume Groups; do not deactivate the system Volume Groups.

    • Issue the lvm vgdisplay command. An example of using that command is:
           [root@office ~]# lvm vgdisplay
           --- Volume group ---
           VG Name EG_TEST
      
           --- Volume group ---
           VG Name EG_TEST-flr-4030-rG64au
      
           --- Volume group ---
           VG Name VolGroup00
      
           --- Volume group ---
           VG Name VolGroup00-flr-4030-7Ms42G
              
    • Issue the vgchange command. An example of using that command is:
           [root@office ~]# vgchange -a n VolGroup00-flr-4030-7Ms42G EG_TEST-flr-4030-rG64au
           0 logical volume(s) in volume group "VolGroup00-flr-4030-7Ms42G" now active
           0 logical volume(s) in volume group "EG_TEST-flr-4030-rG64au" now active
              
    • Find all loop devices, including those used by Linux FLR LVM Volume Groups using the lvm pvscan command. An example of using that command is:
           [root@office ~]# lvm pvscan
           PV /dev/sdc1 VG EG_TEST lvm2 [1020.00 MB / 520.00 MB free]
           PV /dev/loop3 VG EG_TEST-flr-4030-rG64au lvm2 [1020.00 MB / 520.00 MB free]
           PV /dev/sda2 VG VolGroup00 lvm2 [7.88 GB / 0 free]
           PV /dev/sdb1 VG VolGroup00 lvm2 [992.00 MB / 992.00 MB free]
           PV /dev/loop1 VG VolGroup00-flr-4030-7Ms42G lvm2 [7.88 GB / 0 free]
           PV /dev/loop2 VG VolGroup00-flr-4030-7Ms42G lvm2 [992.00 MB / 992.00 MB free]
           Total: 6 [19.68 GB] / in use: 6 [19.68 GB] / in no VG: 0 [0 ]
      
  5. Remove the loop devices used by Linux FLR using the losetup command. In the preceding example, /dev/loop1, /dev/loop2, and /dev/loop3 are in use by FLR LVM Volume Groups. As a result, to remove the loop devices for the previous example, the commands would be:
        [root@office ~]# losetup -d /dev/loop1
        [root@office ~]# losetup -d /dev/loop2
        [root@office ~]# losetup -d /dev/loop3
  6. Remove all fuse mounts. The commands would be of a form similar to:
        [root@office ~]# fusermount -u -z /tmp/vmware-<user name|root>/<fuse mount object>
        [root@office ~]# rm -rf /tmp/vmware-<user name|root>/<fuse mount object>

    Note that vmware-<user name|root> is a placeholder for the location where fuse mounted the volume, and <fuse mount object> is a placeholder for each item found in that directory.

  7. Remove all stale redo logs. The command would be of a form similar to:
        [root@office ~]# rm -rf /tmp/flr*
  8. Kill any remaining FLR processes using a command such as:
        [root@office ~]# killall vdrFileRestore