VMware Data Recovery 2.0.1 Release Notes

VMware Data Recovery 2.0.1 | Build 740238

Last Document Update: 07 Jun 2012

Check frequently for additions and updates to these release notes.

These release notes include the following topics:

  • Benefits and Features
  • Supported Environments
  • Upgrading to Data Recovery 2.0.1
  • Enhancements
  • Known Issues
  • Resolved Issues

Benefits and Features

Read about the benefits and features of this product on the VMware Data Recovery Overview page. For additional information about known issues and resolved issues, see the corresponding sections below.

Supported Environments

For information on supported environments, see the VMware Compatibility Guide.

Upgrading to Data Recovery 2.0.1

Previous Data Recovery installations are likely to have existing restore points that should be preserved. To ensure these restore points are preserved, follow the processes described in this section.

Begin the upgrade process by installing the latest Data Recovery plug-in for the vSphere Client.

To install the latest Data Recovery plug-in

  1. Close the vSphere Client.
  2. Use the Add or Remove Programs item in the Control Panel to uninstall any previous versions of the VMware Data Recovery plug-in.
  3. Start the latest Data Recovery plug-in Windows Installer File (.msi) to install the Data Recovery plug-in.

Next you must deploy the new Data Recovery appliance without deleting existing restore points. If the destination volume for the deduplication store is a virtual disk or an RDM, do not delete the appliance. Deleting the appliance deletes the disks connected to it, which in turn deletes the backup data stored in the deduplication store. To avoid this, complete the following procedure:

To upgrade Data Recovery appliances with virtual disks or RDMs

  1. IMPORTANT: Before upgrading to VMware Data Recovery 2.0.1, make sure all operations in your current environment have completed before shutting down and performing the upgrade. If an integrity check or reclaim operation is running, allow the operation to complete. Do not CANCEL these operations.
  2. When no operations are running, unmount the destination disk and shut down the Data Recovery appliance.
  3. If you want to save the original Data Recovery appliance, rename it in some way. For example, you might rename an appliance called VMware Data Recovery to VMware Data Recovery - OLD.
  4. Deploy the new appliance.
  5. Use the datastore browser to move the disk containing the deduplication store to the same location as the new appliance.
  6. Edit the settings of the new appliance:
    1. Choose Add > Hard Disk.
    2. Choose Use an Existing Virtual Disk.
    3. Browse to the datastore and select the virtual disk that is connected to the older appliance as the destination.
    4. Choose the SCSI address.
    5. Choose Finish.
  7. Power on the new appliance.
  8. Edit the settings of the older appliance:
    1. Choose the hard disk that is used to store the deduplication store.
    2. Select Remove and leave the default option, Remove from virtual machine. DO NOT select Remove from virtual machine and delete files from disk.
    3. Click OK.
  9. Configure networking on the new appliance.
  10. Use the Data Recovery vSphere plug-in to connect to the backup appliance.
  11. Complete the getting started wizard. Mount the desired disk, but do not format it; formatting the disk erases all deduplication store data. The disk to be used may not display the expected name, but the proper name is displayed after the wizard is completed.
  12. You are prompted to restore the configuration from the deduplication store. Select Yes if you want to restore the backup jobs, events, and history.
  13. The client disconnects from the appliance after the configuration is restored and then reestablishes the connection. This may take several minutes.
  14. Once the client reconnects, check to see if a Reclaim or Integrity check operation has started. If so, STOP the operation.
  15. Immediately click Configure > Destinations and perform an integrity check on all mounted destinations.
  16. Verify the backup job configuration.
  17. Remove the older VMware Data Recovery appliance from the inventory.

Note that damaged restore points may cause the upgrade to fail. If you receive the message "Could not restore the Data Recovery appliance configuration", re-add the destination to the original Data Recovery appliance and then run an integrity check to clean up any damaged restore points. After the integrity check completes successfully, repeat the upgrade process.

If the destination volume for the deduplication store is a CIFS share, complete the following procedure:

To upgrade Data Recovery appliances with CIFS shares

  1. IMPORTANT: Before upgrading to VMware Data Recovery 2.0.1, make sure all operations in your current environment have completed before shutting down and performing the upgrade. If an integrity check or reclaim operation is running, allow the operation to complete. Do not CANCEL these operations.
  2. When no operations are running, unmount the destination disk and shut down the Data Recovery appliance.
  3. If you want to save the original Data Recovery appliance, rename it in some way. For example, you might rename an appliance called VMware Data Recovery to VMware Data Recovery - OLD.
  4. Delete the older Data Recovery appliance.
  5. Deploy the new appliance.
  6. Power on the new appliance.
  7. Configure networking on the new appliance.
  8. Use the Data Recovery vSphere plug-in to connect to the backup appliance.
  9. Complete the getting started wizard. On the Backup Destination page, click the Add Network Share link and enter the appropriate information for your particular CIFS share.
  10. You are prompted to restore the configuration from the deduplication store. Select Yes if you want to restore the backup jobs, events, and history.
  11. The client disconnects from the appliance after the configuration is restored and then reestablishes the connection. This may take several minutes.
  12. Once the client reconnects, check to see if a Reclaim or Integrity check operation has started. If so, STOP the operation.
  13. Immediately click Configure > Destinations and perform an integrity check on all mounted destinations.
  14. Verify the backup job configuration.
  15. Remove the older VMware Data Recovery appliance from the inventory.

Note that damaged restore points may cause the upgrade to fail. If you receive the message "Could not restore the Data Recovery appliance configuration", re-add the destination to the original Data Recovery appliance and then run an integrity check to clean up any damaged restore points. After the integrity check completes successfully, repeat the upgrade process.

Enhancements

VMware Data Recovery 2.0.1 includes fixes for known issues (see Resolved Issues), but no new enhancements.

Known Issues

The following known issues have been discovered through rigorous testing. The list of issues below pertains to this release of Data Recovery only.

  • FLR Fails To Mount Virtual Machines With Upgraded Hardware Versions

    When FLR is used to access a virtual machine, drivers are installed on the machine that is accessing the virtual machine. The installed drivers correspond to virtual hardware versions. If a virtual machine is upgraded from virtual hardware version 4 to virtual hardware version 7, the virtual machine can no longer be accessed because the drivers are out of date. To resolve this issue, remove the old drivers and install new ones. For more information on virtual hardware versions, see KB 1003746.

  • Email Reports Format Improperly

    In some cases, multiple sections of the email report may be displayed on a single line of text, which may be difficult to read. This typically occurs in Microsoft Outlook. To resolve this issue, use method 1 described in Microsoft's KB 287816.

  • Previous Client Plug-In Versions Interpret Backup Appliances That Require SSL As Being Unresponsive

    By default, Data Recovery 2.0 requires SSL connections. Data Recovery 1.2.1 and earlier are not designed to use SSL. As a result, problems may occur if a 2.0 version of the backup appliance requires SSL, and a user is attempting to connect using an older version of the client plug-in. In such a case, the client plug-in interprets the failure to connect as indicating that the backup appliance is not running or is unresponsive. To resolve this issue, upgrade the client plug-in to the same version as the backup appliance, thereby enabling SSL support.

    Alternatively, SSL can be disabled on the backup appliance by modifying settings in the datarecovery.ini file. The ConnectionAcceptMode setting controls how Data Recovery connects over the network. The default setting of 1 requires SSL. A setting of 2 allows only plaintext connections. A setting of 3 tries SSL but falls back to plaintext.
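
    For example, to let older client plug-ins connect during a staged upgrade, ConnectionAcceptMode could be set so the appliance tries SSL first and falls back to plaintext. This is a minimal sketch of the setting line only; its exact location within datarecovery.ini may differ on your appliance:

        ConnectionAcceptMode=3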

  • Restore of Virtual Machine Fails When Connected Directly to ESXi Host

    Sometimes you must restore a virtual machine directly to an ESXi host, for example in disaster recovery when ESXi hosts the vCenter Server as a virtual machine. A new vSphere 5 feature tries to prevent this if the ESXi 5.0 host is managed by vCenter. To circumvent this protection and restore the virtual machine, you must first disassociate the host from vCenter. (In earlier releases, vCenter management of a host involved less state and could be revoked only from vCenter itself.)

    1. Using the vSphere Client, connect directly to the ESXi 5.0 host.
    2. In the Inventory left-hand panel, select the host.
    3. In the right-hand panel, click Summary.
    4. In the box titled Host Management, click Disassociate host from vCenter Server.
      It is not necessary to put the host in Maintenance Mode.
    5. Once the vCenter Server has been restored and is back in service, use it to reacquire the host.

  • If Using HotAdd Backup, Add SCSI Controllers to Linux Virtual Machines in Numeric Order

    Linux systems lack an interface to report which SCSI controller is assigned to which bus ID, so HotAdd assumes that the unique ID for a SCSI controller corresponds to its bus ID. This assumption could be false. For instance, if the first SCSI controller on a Linux VM is assigned to bus ID 0, but you add a SCSI controller and assign it to bus ID 3, HotAdd advanced transport mode may fail because it expects unique ID 1. To avoid problems, when adding SCSI controllers to a VM, the bus assignment for the controller must be the next available bus number in sequence. Also note that VMware implicitly adds a SCSI controller to a VM if a bus:disk assignment for a newly created virtual disk refers to a controller that does not yet exist. For instance, if disks 0:0 and 0:1 are already in place, adding disk 1:0 is acceptable, but adding disk 3:0 breaks the bus ID sequence, implicitly creating out-of-sequence SCSI controller 3. To avoid HotAdd problems, you should also add virtual disks in numeric sequence.
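
    As a purely illustrative sketch (not a VMware tool), the following Python snippet checks whether a proposed SCSI controller bus ID continues the existing sequence; all names here are hypothetical:

        # Hypothetical check: a new SCSI controller's bus ID must be the
        # next number in sequence (0, 1, 2, ...) with no gaps.
        def next_valid_bus_id(existing_bus_ids):
            return max(existing_bus_ids, default=-1) + 1

        existing = [0]    # first controller sits on bus 0
        proposed = 3      # adding a controller on bus 3 breaks the sequence
        expected = next_valid_bus_id(existing)
        if proposed != expected:
            raise ValueError("bus %d is out of sequence; use bus %d"
                             % (proposed, expected))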

Resolved Issues

The following issues have been resolved since the last release of Data Recovery. The list of resolved issues below pertains to this release of Data Recovery only.

  • Snapshot Consolidation Fails with hostd Reporting “The File Is Already in Use”

    The problem resulted in the error, “Failed to consolidate disks in Foundry: Error: (15) The file is already in use.” The issue was caused by premature exit of the vpxa process, so vpxa did not inform the backup code that HotAdd disks should be removed. Stale HotAdd disks remained and were not recognized the next time the backup software ran. This caused snapshot deletion to fail, resulting in a buildup of redo logs. The fix is to wait longer for the vpxa process to report, and to apply heuristics to clean up stale HotAdd disks if they appear.
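
    The fix follows a common wait-then-cleanup pattern. The sketch below is illustrative only, not VDR source; the condition being polled is hypothetical:

        import time

        # Illustrative pattern: poll for a condition (e.g. "vpxa has
        # reported") with a longer deadline; if it never arrives, fall
        # back to heuristic cleanup of whatever was left behind.
        def wait_for(condition, timeout_s=300, interval_s=5):
            deadline = time.monotonic() + timeout_s
            while time.monotonic() < deadline:
                if condition():
                    return True
                time.sleep(interval_s)
            return False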

  • VDR Reports “-1115 (disk full)” Errors When the VDR Root File System Is Low on Space

    The problem was caused by erroneous use of a relative path within VDR. The fix is to use the full path.
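
    The following Python sketch illustrates the general class of bug, not the VDR code itself; the paths shown are hypothetical:

        import os

        # A relative path resolves against the current working directory,
        # which varies by process; a full path is deterministic.
        rel = "store/catalog.db"                  # hypothetical file
        print(os.path.abspath(rel))               # depends on os.getcwd()
        print(os.path.join("/var/vmware", rel))   # hypothetical full path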

  • Recatalog/Integrity Check Fails Frequently

    The fix allows the CleanHouse process to clear orphaned data nodes that have data check issues.

  • VDR Email Report Date/Time Incorrect

    This bug was caused by an off-by-one error in the code, resulting in the day portion of the date string being replaced with the locale string. For example, when the actual string is supposed to be:

        Date: Mon, 29 Nov 11 1:15:15 +0900

    you might receive the following incorrect string:

        Date: en_US.UTF-8, 27 Nov 11 9:28:08 +0900
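
    A minimal Python sketch of this class of off-by-one follows; it is not the VDR code, and the field layout is hypothetical:

        # Indexing one slot too low substitutes the neighboring value --
        # here the locale name appears where the weekday belongs.
        fields = ["en_US.UTF-8", "Mon", "29 Nov 11 1:15:15 +0900"]
        day = 1                 # index of the weekday field
        print(fields[day - 1])  # bug: prints "en_US.UTF-8"
        print(fields[day])      # fix: prints "Mon"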

  • VDR Virtual Appliance Did Not Have Enough Memory

    VDR 2.0 had the following default memory settings in the virtual appliance:

    • Memory – 2GB
    • Open fd ulimit – 1024

    This caused file opens to fail and operations to run more slowly in large backup configurations. In VDR 2.0.1, these settings have been revised to:

    • Memory – 4GB
    • Open fd ulimit – 8192
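
    To verify the file-descriptor limit on a running appliance, a quick Python check could look like the following (illustrative only, not shipped with VDR):

        import resource

        # Report the soft/hard limits on open file descriptors for the
        # current process; on VDR 2.0.1 the soft limit should be 8192.
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        print("open fd ulimit: soft=%d, hard=%d" % (soft, hard))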

  • Web Management Interface Is Unavailable

    When network connections were intermittent, vcbAPI crashed with the following signature:

    		Error info: (null)
    		Signal no: 11 SIGSEGV
    		Error no: 00
    		Sig Code: 01
    		Fault Addr: 0xDC5CEB30 -- "Unable to get symbol for 0x5555dc5ceb30"
    		Thread ID: 0x4CFBD940, Name: Execution thread
    
    		/usr/lib/vmware/datarecovery/libbedrock.so (DebugHandler::walkStack()+0x1b) [0x2b8ff4e7c48b]
    		/usr/lib/vmware/datarecovery/libbedrock.so (DebugHandler::generateExceptionReport(siginfo*)+0x5a) [0x2b8ff4e7c55e]
    		/usr/lib/vmware/datarecovery/libbedrock.so (DebugHandler::SignalHandler(int, siginfo*, void*)+0x35) [0x2b8ff4e65581]
    		/usr/lib/vmware/datarecovery/libbedrock.so (SignalHandler_x(int, siginfo*, void*)+0x18) [0x2b8ff4e6560a]
    		/lib64/libpthread.so.0 [0x2b8ff4614b10]
    		/usr/lib/vmware/datarecovery/libvcbAPI.so (VcbAPI::PropCollWrap::UpdateInfo(char const*)+0x13f) [0x2aaaaecdd5ef]
    		
    This was caused by an attempt to access the Property Collector while the connection was down. The code path was fixed to handle this condition correctly.
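
    A guard of the following shape illustrates the fix pattern; it is not the actual vcbAPI code, and the session attributes named are hypothetical:

        # Check the connection before touching the Property Collector
        # rather than dereferencing an invalid handle (the SIGSEGV above).
        def update_info(session, name):
            if session is None or not session.connected:  # hypothetical API
                raise ConnectionError("connection down; skipping update")
            return session.property_collector.update(name)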

  • Multiple Email Reports Sent for Single Incident

    This was caused by VDR erroneously sending multiple notifications for a single incident. The code path has been fixed.