
VMware Data Recovery 2.0.2 Release Notes

VMware Data Recovery 2.0.2 | Build 1211961

Last Document Update: 22 Aug 2013

Check frequently for additions and updates to these release notes.

These release notes include the following topics:

  • What's New
  • Earlier Releases of VMware Data Recovery
  • Supported Environments
  • Upgrading to Data Recovery 2.0.2
  • Resolved Issues
  • Known Issues

What's New

VMware Data Recovery 2.0.2 delivers a number of bug fixes that have been documented in the Resolved Issues section, but no new enhancements. However, you can read about the benefits and features of VMware Data Recovery at Features Overview.

Earlier Releases of VMware Data Recovery

Features, resolved issues, and known issues from earlier releases of VMware Data Recovery are described in the release notes for each release.

Supported Environments

For information on supported environments, see the VMware Compatibility Guide.

Upgrading to Data Recovery 2.0.2

Earlier Data Recovery installations are likely to have existing restore points that should be preserved. To ensure these restore points are preserved, perform the following tasks.

Install the latest Data Recovery plug-in

  1. Close the vSphere Client.
  2. Use the Add or Remove Programs item in Control Panel to uninstall all earlier versions of the VMware Data Recovery plug-in.
  3. Start the latest Data Recovery plug-in Windows Installer File (.msi) to install the Data Recovery plug-in.

Deploy the new Data Recovery appliance without deleting existing restore points. If the destination volume for the deduplication store is a virtual disk, do not delete the appliance. Deleting the appliance deletes the disks connected to the appliance, and causes the backup data stored in the deduplication store to be deleted.

Upgrade Data Recovery appliances with virtual disks or RDMs

Caution: Before upgrading to VMware Data Recovery 2.0.2, make sure all operations in your current environment have completed before shutting down and performing the upgrade. If an integrity check or reclaim operation is running, allow the operation to complete. Do not cancel these operations.
  1. When no operations are running, unmount the destination disk and shut down the Data Recovery appliance.
  2. If you want to save the original Data Recovery appliance, rename it. For example, you might rename an appliance called VMware Data Recovery to VMware Data Recovery - OLD.
  3. Deploy the new appliance.
  4. Use the datastore browser to move the disk containing the deduplication store to the same location as the new appliance.
  5. Edit the settings of the new appliance:
    1. Navigate to Add > Hard Disk and click Use an Existing Virtual Disk.
    2. Browse to the datastore and select the virtual disk that is connected to the older appliance as the destination.
    3. Select the SCSI address.
    4. Click Finish.
  6. Power on the new appliance.
  7. Edit the settings of the old appliance:
    1. Select the hard disk that stores the deduplication store.
    2. Click Remove and leave the default option, Remove from virtual machine.
      Do not select Remove from virtual machine and delete files from disk.
    3. Click OK.
  8. Configure networking on the new appliance.
  9. Use the Data Recovery vSphere plug-in to connect to the backup appliance.
  10. Complete the getting started wizard. You must mount the desired disk, but do not format it. Formatting the disk erases all deduplication store data. The disk to be used might not display the expected name, but the proper name is displayed after the wizard is completed.
  11. When you are prompted to restore the configuration from the deduplication store, click Yes if you want to restore the jobs and backup events and history.
    The client disconnects from the appliance after the configuration is restored and then reestablishes the connection. This might take several minutes.
  12. Once the client reconnects, check whether a Reclaim or Integrity Check operation has started and, if one has, stop it.
  13. Navigate to Configure > Destinations and perform an integrity check on all mounted destinations.
  14. Verify the backup job configuration.
  15. Remove the older VMware Data Recovery appliance from the inventory.

Damaged restore points might cause the upgrade to fail. If you receive the message Could not restore the Data Recovery appliance configuration, re-add the destination to the original Data Recovery appliance and then run an integrity check to clean up any damaged restore points. After the integrity check runs successfully, repeat the upgrade process.

Upgrade Data Recovery appliances with CIFS shares

Caution: Before upgrading to VMware Data Recovery 2.0.2, make sure all operations in your current environment have completed before shutting down and performing the upgrade. If an integrity check or reclaim operation is running, allow the operation to complete. Do not cancel these operations.
  1. When no operations are running, unmount the destination disk and shut down the Data Recovery appliance.
  2. If you want to save the original Data Recovery appliance, rename it. For example, you might rename an appliance called VMware Data Recovery to VMware Data Recovery - OLD.
  3. Deploy and power on the new appliance.
  4. Configure networking on the new appliance.
  5. Use the Data Recovery vSphere plug-in to connect to the backup appliance.
  6. Complete the getting started wizard. On the Backup Destination page, you must click the Add Network Share link and type the appropriate information for your CIFS share.
  7. When you are prompted to restore the configuration from the deduplication store, click Yes if you want to restore the jobs and backup events and history.
    The client disconnects from the appliance after the configuration is restored and then reestablishes the connection. This might take several minutes.
  8. Once the client reconnects, check whether a Reclaim or Integrity Check operation has started and, if one has, stop it.
  9. Navigate to Configure > Destinations and perform an integrity check on all mounted destinations.
  10. Verify the backup job configuration.
  11. Remove the older VMware Data Recovery appliance from the inventory.

Damaged restore points might cause the upgrade to fail. If you receive the message Could not restore the Data Recovery appliance configuration, re-add the destination to the original Data Recovery appliance and then run an integrity check to clean up any damaged restore points. After the integrity check runs successfully, repeat the upgrade process.

Resolved Issues

The following issues have been resolved since the last release of Data Recovery:

  • HotAdd of a virtual disk fails with the Essentials Plus license
    During VDR backup, hot addition of a virtual disk fails with an error message if you are using the Essentials Plus license.
    This issue has been resolved in this release. The license check has been removed from the HotAdd code path.

  • Virtual machine disks are dropped from the VDR inventory after backup
    When you back up virtual machines by using HotAdd, the virtual machine disks of these virtual machines might get dropped from the VDR inventory, preventing further backups of these virtual machines.
    This issue has been resolved in this release.

  • Data Recovery plug-in might cause memory leak in vpxd
    When the Data Recovery plug-in is enabled, vpxd might leak memory if the connection between the vSphere Client and vCenter Server is lost multiple times.
    This issue has been resolved in this release.

  • High network traffic between Data Recovery appliance and vCenter Server
    Host information exchanged between Data Recovery appliance and vCenter Server results in high network traffic.
    This issue has been resolved in this release.

  • Multiple email reports sent for a single event
    Due to a 32-bit timer overflow, the VDR appliance sends multiple email reports for a single event.
    This issue has been resolved in this release.
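The wraparound behind this issue can be sketched in a few lines. This is an illustrative model only, not the appliance's actual code: a tick counter stored in an unsigned 32-bit field wraps to zero at 2^32, so a naive "was this event already reported?" comparison fails after the wrap and the report fires again.

```python
# Illustrative sketch of a 32-bit timer overflow (hypothetical names;
# not the VDR appliance's actual implementation).
UINT32_MAX = 2**32 - 1

def tick(counter):
    """Advance a 32-bit tick counter, wrapping at 2^32."""
    return (counter + 1) & UINT32_MAX

def already_reported(event_tick, now_tick):
    """Naive check that breaks after wraparound: an event recorded at a
    high tick value looks 'newer' than the wrapped current tick, so the
    notification logic sends the report again."""
    return event_tick <= now_tick

c = UINT32_MAX
c = tick(c)                     # counter wraps to 0
assert c == 0

# Before the wrap, the event is correctly seen as already reported;
# after the wrap, the comparison fails and the email is re-sent.
assert already_reported(UINT32_MAX - 10, UINT32_MAX)
assert not already_reported(UINT32_MAX - 10, c)
```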

  • The Data Recovery daemon might fail with segmentation fault during integrity check
    During integrity check, the Data Recovery daemon might fail due to stack exhaustion and might log messages similar to the following to /var/log/messages:
    vdr201 kernel: datarecovery[4020]: segfault at 000000004a1b3fe8 rip 00002aaab5b7831f rsp 000000004a1b3fe0 error 6
    vdr201 VMware[init]: /usr/bin/vmware-watchdog: line 84: 3179 Segmentation fault (core dumped) setsid $CMD >> $OUTPUT 2>&1 < $INPUT
    vdr201 watchdog-datarecovery: '/usr/sbin/datarecovery' exited after 84300 seconds

    This issue has been resolved in this release.

  • Mount state of deduplication store is incorrectly updated after you reset the VDR appliance
    If you unmount a deduplication store, mount it again, and then reset the VDR appliance, the state of the deduplication store is incorrectly updated as unmounted.
    This issue has been resolved in this release.

Known Issues

The following known issues occur in the VMware Data Recovery 2.0.2 release:

  • File Level Restore (FLR) fails to mount virtual machines with upgraded hardware versions
    When FLR is used to access a virtual machine, drivers are installed on the machine that is accessing the virtual machine. The installed drivers correspond to virtual hardware versions. If virtual machines are upgraded from virtual hardware version 4 to virtual hardware version 7, the virtual machine can no longer be accessed because the drivers are out of date.
    Workaround: To work around this issue, the old drivers must be removed and new ones installed. For more information about virtual hardware versions, see KB 1003746.

  • Email reports are formatted improperly
    Multiple sections of an email report might be displayed on a single line of text, which can make the report difficult to read. This typically occurs in Microsoft Outlook.
    Workaround: To work around this issue, disable the feature that removes extra line breaks as described in Microsoft KB 287816.

  • Earlier client plug-in versions interpret backup appliances that require SSL as being unresponsive
    By default, Data Recovery 2.0 requires SSL connections. Data Recovery 1.2.1 and earlier are not designed to use SSL. As a result, problems might occur if a 2.0 version of the backup appliance requires SSL, and you attempt to connect by using an older version of the client plug-in. In such a case, the client plug-in interprets the connection failure as an indicator of the backup appliance not running or being nonresponsive.
    Workaround: To work around this issue, upgrade the client plug-in to the same version as the backup appliance, thereby enabling SSL support.
    Alternatively, you can disable SSL on the backup appliance by modifying settings in the datarecovery.ini file. ConnectionAcceptMode controls how Data Recovery connects over the network. The default ConnectionAcceptMode setting of 1 requires SSL. A setting of 2 requires plaintext. A setting of 3 tries SSL, but allows fallback to plaintext.
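    As a sketch, the relevant entry in datarecovery.ini might look like the following. The file path and section name shown here are assumptions; edit the datarecovery.ini file that already exists on your appliance rather than creating a new one.

```ini
# datarecovery.ini on the backup appliance (path and section name are
# illustrative; locate the existing file on your appliance)
[Options]
# 1 = require SSL (default), 2 = require plaintext,
# 3 = try SSL, fall back to plaintext
ConnectionAcceptMode=3
```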

  • Attempt to restore virtual machine fails when you connect directly to ESXi host
    Sometimes you must restore a virtual machine directly to an ESXi host, for example in disaster recovery when ESXi hosts the vCenter Server as a virtual machine. A new vSphere 5 feature tries to prevent this if the ESXi 5.0 host is managed by vCenter.
    Workaround: To work around this issue and restore the virtual machine, you must first disassociate the host from vCenter. In earlier releases, hosts retained less vCenter management state, but that management could be revoked only from vCenter itself.
    1. Using the vSphere Client, connect directly to the ESXi 5.0 host.
    2. In the Inventory panel, select the host.
    3. In the right-hand panel, click Summary.
    4. Under Host Management, click Disassociate host from vCenter Server.
      You need not put the host in Maintenance Mode.
    5. After the vCenter Server has been restored and is back in service, use it to reacquire the host.

  • Adding SCSI controllers to Linux virtual machines in non-numeric order might cause HotAdd problems
    Linux systems lack an interface to report which SCSI controller is assigned to which bus ID, so HotAdd assumes that the unique ID for a SCSI controller corresponds to its bus ID. This assumption might be false. For instance, if the first SCSI controller on a Linux virtual machine is assigned to bus ID 0, but you add a SCSI controller and assign it to bus ID 3, HotAdd advanced transport mode might fail because it expects unique ID 1.
    Workaround: To work around this issue, when adding SCSI controllers to a virtual machine, the bus assignment for the controller must be the next available bus number in sequence. VMware implicitly adds a SCSI controller to a virtual machine if a bus:disk assignment for a newly created virtual disk refers to a controller that does not yet exist. For instance, if disks 0:0 and 0:1 are already in place, adding disk 1:0 is acceptable, but adding disk 3:0 breaks the bus ID sequence, implicitly creating the out-of-sequence SCSI controller 3. To avoid HotAdd problems, add virtual disks in numeric sequence as well.
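    The sequential bus assignment described above can be illustrated with a .vmx configuration fragment. The controller type and disk file names below are illustrative, but the key pattern is the point: each new SCSI controller takes the next bus number in sequence, so after disks scsi0:0 and scsi0:1, the next disk on a new controller should be scsi1:0, not scsi3:0.

```
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "disk0.vmdk"
scsi0:1.present = "TRUE"
scsi0:1.fileName = "disk1.vmdk"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "disk2.vmdk"
```

    Referring to a disk as scsi3:0 instead would implicitly create the out-of-sequence controller scsi3 and could trigger the HotAdd failure described above.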