vSphere Management Assistant (vMA) 4.0
Release Notes

Released 21-May-2009

Build 161993 is the release build of the vSphere Management Assistant 4.0.

Welcome to the vMA 4.0 release notes.

The vSphere Command-Line Interface (vSphere CLI) is included with vMA. See the vSphere Command-Line Interface Release Notes for information about vSphere CLI issues.

About vMA

vMA is a virtual machine that includes prepackaged software. Administrators and developers can use vMA to run scripts and agents to manage VMware ESX/ESXi 3.5 Update 2 and later, ESX/ESXi 4.0, and vCenter Server 4.0 systems. vMA includes the vSphere SDK for Perl and the vSphere Command-Line Interface (vSphere CLI). vMA also includes an authentication component (vi-fastpass) and a logging component (vi-logger). vi-fastpass allows direct connection to established target servers without user intervention. vi-logger allows you to collect logs from ESX/ESXi and vCenter Server systems and store the logs on vMA for analysis.
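A typical vi-fastpass session looks like the following sketch. The hostname is a placeholder, and the exact option spellings can differ slightly by release; see the vSphere Management Assistant Guide for the authoritative syntax.

```shell
# Add an ESX/ESXi host as a vMA target (you are prompted once for
# that host's credentials). Hostname is a placeholder.
sudo vifp addserver esx01.example.com

# Initialize vi-fastpass for this shell session.
vifpinit esx01.example.com

# vSphere CLI commands now run against the target without
# --username/--password; this one lists the host's physical NICs.
vicfg-nics --server esx01.example.com -l

# Ask vi-logger to start collecting this host's logs on vMA.
vilogger enable --server esx01.example.com
```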

What’s New in vMA 4.0

VIMA 1.0 includes components compatible with VMware Infrastructure 3 only. vMA 4.0 includes components that are compatible with vSphere 4.0 and VMware Infrastructure 3.

  VIMA 1.0 component                            vMA 4.0 component
  VMware Remote Command-Line Interface 3.5 U2   vSphere CLI 4.0
  VMware Infrastructure Perl Toolkit 1.6        vSphere SDK for Perl 4.0
  VMware Tools                                  VMware Tools (most recent version)


vMA includes the following new components and functionality:

  • SMI-S version 1.0.2. vMA supports a secure, auditable CIM SMI-S implementation: client applications connect to a vCenter Server system and do not need authentication credentials for each ESX/ESXi host. vMA includes the VMware implementation of the CIM profiles compatible with SMI-S 1.0.2.
  • vCenter Server targets. vMA 4.0 supports vCenter Server systems as vMA targets. After you establish a vCenter Server system as a vMA target and call vifpinit, you can run most vSphere CLI commands with --vihost pointing to one of the ESX/ESXi hosts that the vCenter Server system manages. No additional authentication is required.
  • Tab completion. Linux-style Tab completion is now available in vMA for vMA binaries (in addition to default tab completion). vMA provides completion for vMA commands and for datastore names used with vSphere CLI commands.
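As a sketch of the vCenter Server target workflow described above (hostnames are placeholders; verify the exact options against the vSphere CLI documentation):

```shell
# Add a vCenter Server system as a vMA target and initialize
# vi-fastpass for this session.
sudo vifp addserver vc01.example.com
vifpinit vc01.example.com

# Run a vSphere CLI command against one of the ESX/ESXi hosts that
# the vCenter Server system manages; no host credentials are needed.
vicfg-nics --server vc01.example.com --vihost esx01.example.com -l
```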

VIMA 1.0 supports ESX/ESXi 3.5 Update 2 and later. Because vMA 4.0 supports both ESX/ESXi 3.5 Update 2 and later and ESX/ESXi 4.0 targets, you can replace VIMA 1.0 with vMA 4.0. See VMware KB article 1008940 for information on vSphere CLI 4.0 command support in ESX/ESXi 3.5 Update 2 and later.

vMA 4.0 has been tested with up to 100 target servers under normal load conditions.


Supported Platforms

You can deploy the vMA virtual appliance to an ESX/ESXi 3.5 Update 2 or later system or to an ESX/ESXi 4.0 system. See the vSphere Management Assistant Guide for installation information.

To set up vMA, you need the following hardware and software:

  • ESX/ESXi host. Because vMA runs a 64-bit operating system, the ESX/ESXi host on which it runs must support 64-bit virtual machines. The host must have one of the following CPUs:
    • For AMD Opteron CPUs, the processor must be Rev E or later. AMD-V hardware virtualization is not required.
    • Intel processors with EM64T support with VT enabled.
    Opteron 64-bit processors earlier than Rev E, and Intel processors that have EM64T support but do not have VT enabled, do not support a 64-bit guest operating system.
  • vSphere Client. You need a vSphere Client for deploying vMA to the ESX/ESXi host.

vMA has the following requirements and restrictions:

  • 512 MB of memory is recommended for vMA.
  • 5 GB of storage is required for the vMA virtual disk.
  • By default, vMA uses one virtual processor.

For information on setting up vMA for a non-English keyboard or a non-default time zone, see VMware KB article 1007551.

Installing vMA

You can deploy the vMA OVF from your vSphere Client connected to a vCenter Server or ESX/ESXi system, as discussed in the vSphere Management Assistant Guide.

You can download and deploy the vMA ZIP file or deploy from URL.

  • Downloading and deploying the vMA ZIP file
    1. Download the vMA ZIP file and unzip the file.
    2. In the vSphere Client, choose Virtual Appliance > Deploy.
    3. When prompted by the Wizard, click Deploy from File and point to the OVF in the folder to which you extracted the ZIP file.
  • Deploying from URL
    1. In the vSphere Client, choose Virtual Appliance > Deploy.
    2. When prompted by the Wizard, click Deploy from URL and enter the following URL:


When you start up vMA, proceed as follows:
  • When prompted, specify custom network settings, or accept the defaults.
  • When prompted, supply a host name for the vMA virtual machine or accept the default.
  • When prompted, specify a password for the vi-admin user for logging in to vMA. The vi-admin user has root privileges on vMA. The root user is disabled.

Important: You cannot use the vima-update utility to update your VIMA 1.0 system to vMA 4.0. Deploy vMA 4.0 instead.

Important: VMware update utilities are intended for updates to shipped products. You cannot use the vima-update utility to update vMA 4.0 releases that were available through the Beta program to the vMA 4.0 release.

Resolved Issues

The following issues that were reported in VIMA 1.0 have been resolved.


  • VIMA 1.0 supports a maximum of 19 target servers. In VIMA 1.0, you cannot add more than 19 target servers. Resolved in vMA 4.0.

  • resxtop displays error upon startup. When you run resxtop, it displays an error but works correctly afterwards. Resolved in vMA 4.0.

  • After vi-fastpass has been set up, resxtop does not work correctly with non-target servers. If you set up some target servers for vMA and run resxtop --server myTarget, the command succeeds with a minor error that you can ignore. However, if you attempt to run resxtop on non-target servers using the --server and --username options, the command fails. Resolved in vMA 4.0.

  • esxcli does not support vi-fastpass. Even if you set up vMA to use vi-fastpass authentication, esxcli continues to prompt you for a user name and password. Resolved in vMA 4.0.

  • vmware-cmd does not support vi-fastpass. The vmware-cmd vSphere CLI command does not support vi-fastpass. Resolved in vMA 4.0.

  • vima-update help message confusing. The vima-update command supports only the scan and update options. Running vima-update --help lists additional options, which are not supported. Resolved in vMA 4.0.

  • VI Perl Toolkit included in VIMA does not support CIM. VI Perl Toolkit 1.6 includes the Web Services for Management component, which supports CIM. This component is not included with VIMA 1.0. vMA 4.0 does include the vSphere SDK for Perl CIM component. Resolved in vMA 4.0.

  • vima-update uses HTTP. For VIMA 1.0, vima-update supports only the HTTP protocol. vMA 4.0 supports both HTTP and HTTPS. Resolved in vMA 4.0.

Known Issues

The vMA 4.0 release has the following known issues:

  • Cannot move or clone vMA. Moving or cloning vMA is not supported.
    Workaround: No workaround.

  • When the /var partition fills up, vMA behavior might become unpredictable. When the /var partition on vMA fills up, typically because a large amount of log data has been collected, vMA might exhibit the following symptoms:

    • vilogger commands fail because vilogd is stopped and cannot be restarted.
    • vifp commands display logs on the screen instead of writing the log messages to log files.
    • A call to dmesg | grep vilogd lists segmentation faults for vilogd.
    • A call to vifp removeserver cannot disable logs on the server being removed.
    Workaround: First, check whether you are collecting more information than needed. If you need all log information you are collecting, follow these steps to resolve the issue:
    1. Stop the vmware-vilogd and vmware-vifpd services.
    2. Extend /var or change vilogdefaults.xml so that vilogd starts putting the logs in a location where you have enough space.
    3. Restart the vmware-vilogd and vmware-vifpd services.
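    The recovery steps above might look like the following sketch. The service names come from this document; the configuration file path is an assumption, so locate vilogdefaults.xml on your appliance before editing it.

```shell
# 1. Stop the logging and fastpass daemons.
sudo service vmware-vilogd stop
sudo service vmware-vifpd stop

# 2. Either extend /var, or edit vilogdefaults.xml so that vilogd
#    writes logs to a volume with enough free space. The path below
#    is an assumption; locate the file first with:
#      sudo find /etc -name vilogdefaults.xml
sudo vi /etc/vmware/vMA/vilogdefaults.xml

# 3. Restart the services.
sudo service vmware-vifpd start
sudo service vmware-vilogd start
```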

  • When vi-user runs the vicfg-snmp vSphere CLI command, an incorrect error message results. If you log in to vMA as vi-user and run vicfg-snmp, the following error message results: "system does not have SNMP agent configuration supported". This error message is incorrect. It should explain that vi-user does not have privileges for running the command.
    Workaround: There are two workarounds:
    • Log in to vMA as vi-admin to run the command.
    • Log in to a different shell as vi-user and run the command without vi-fastpass by specifying the target using the --server option.
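    The second workaround might look like this sketch (placeholder hostname; you are prompted for the password):

```shell
# Run vicfg-snmp as vi-user against an explicit target, bypassing
# vi-fastpass. --show prints the current SNMP agent configuration.
vicfg-snmp --server esx01.example.com --username root --show
```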

  • vi-user cannot run vifp listservers and vifp help. In VIMA 1.0, a user logged in as vi-user can run any vifp command that does not require sudo. In vMA 4.0, vi-user can run only the vilogger help and vifpinit commands.
    Workaround: No workaround. You can run the commands if you log in as vi-admin.

  • Cannot use --username if vi-fastpass has been set up. You add one or more target servers and initialize vi-fastpass with vifpinit. Then you call a vSphere CLI or vSphere SDK for Perl command specifying --username username. vMA does not prompt you for a password for that user and the command fails with the error "incorrect username/password".
    Workaround: Run the command without specifying --username.

  • svmotion does not support vi-fastpass. The svmotion command does not support vi-fastpass.
    Workaround: You can run svmotion using default vSphere CLI authentication options such as user name and password.

  • Poor vSphere Client performance because of vMA log collection. On a vSphere Client connected to an ESX/ESXi host, performance is poor. If vMA is collecting logs from that ESX/ESXi host, check the vMA vilogd log file for authentication failure errors. If there are errors, the authentication credentials for the ESX/ESXi host might have become out of sync with the vMA credentials, and the vMA connection attempts slow down the vSphere Client.
    Workaround: Remove the ESX/ESXi host from vMA using vifp removeserver, then add the server again using vifp addserver.
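    Sketched with a placeholder hostname, the workaround is:

```shell
# Remove the out-of-sync host, then add it back to refresh the
# stored credentials (you are prompted for the host's credentials).
sudo vifp removeserver esx01.example.com
sudo vifp addserver esx01.example.com
```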


  • Deploying vMA on ESXi 3.5 Update 2 hosts is slow. When you deploy vMA 4.0 on an ESXi 3.5 Update 2 host, expect slow performance during deployment.

    Workaround: This issue is resolved in later versions of ESX 3.5. You might also consider deploying on an ESX/ESXi 4.0 host or on an ESX 3.5 Update 2 host.


  • No error when deploying vMA on unsupported hardware. When you deploy vMA on unsupported hardware, no error results. An error results only when you attempt to start the virtual machine.
    Workaround: Deploy vMA on one of the supported platforms. See Supported Platforms.

  • Cannot disable time synchronization on vMA. When you resume a suspended vMA, reboot vMA, or take a checkpoint, the vMA system time is synchronized to the ESX/ESXi system on which vMA runs, even if tools.syncTime is set to false in the VMX file. The tools.syncTime option controls only whether time is periodically resynchronized while the virtual machine is running.
    Workaround: See VMware KB Article 1189 (http://kb.vmware.com/kb/1189) for some information on disabling time synchronization completely.

  • vilogger and vSphere CLI commands fail because of credential store corruption. If a call to vifp rotatepassword is interrupted, for example, because network connectivity is lost or the user presses Ctrl+C at the command prompt, the local credential store file (vicredentials.xml) might become corrupted.
    Workaround: You can repair a corrupted credential store file as follows:
    1. Edit the credential store file and make the XML well formed, that is, ensure that every XML element tag (for example, <tag42>) has a corresponding end tag (for example, </tag42>).
    2. Run vifp recoverserver for all target servers. This command recreates the credential store entries for the servers.

    Note: vifp recoverserver is not a fully supported and documented command. Use the command only as instructed by VMware support staff, a VMware Knowledge Base article, or other VMware communication.
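    A sketch of the repair follows. The credential store path and the use of xmllint are assumptions; confirm the file location on your appliance, and run vifp recoverserver only as instructed by VMware support.

```shell
# Check whether the credential store is well-formed XML. The path is
# an assumption, and xmllint may or may not be present on the appliance.
xmllint --noout /home/vi-admin/.vmware/credstore/vicredentials.xml

# After repairing the XML by hand, recreate the entry for each
# target server (placeholder hostname).
sudo vifp recoverserver esx01.example.com
```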


  • Incorrect association for RDM snapshots. The SMI-S CIM Server that runs inside vMA incorrectly handles a storage configuration that contains an RDM linked to a snapshot of the virtual disk.

    The VMware_StorageExtent instance that represents the LUN of the RDM is linked by a VMware_RDMBasedOn association to the VMware_RDMStorageVolume instance that represents the virtual disk. In the presence of a snapshot, if you traverse the RDMBasedOn association from the StorageExtent instance to the RDMStorageVolume class, the CIM server returns incorrect data. The RDMStorageVolume instance returned by the CIM server incorrectly reports data for the snapshot, instead of data for the original RDM. However, if you traverse the RDMBasedOn association from the RDMStorageVolume instance to the StorageExtent class, the CIM server returns the correct StorageExtent.


  • Issue when running SMI-S using WSMan from vMA when a virtual RDM has snapshots. The SMI-S CIM Server that runs in vMA incorrectly handles a storage configuration that contains an RDM linked to a snapshot of the virtual disk. The VMware_StorageExtent instance that represents the LUN of the RDM is linked by a VMware_RDMBasedOn association to the VMware_RDMStorageVolume instance that represents the virtual disk. In the presence of a snapshot, a traversal of the RDMBasedOn association from the RDMStorageVolume to the StorageExtent reports the correct result, but a traversal in the reverse direction reports an incorrect result. When the client traverses RDMBasedOn from the StorageExtent instance to the RDMStorageVolume instance, the CIM server incorrectly returns a reference to the associated SnapshotStorageVolume.
    Workaround: No workaround. However, all association traversals with this incorrect RDMStorageVolume work as if the correct RDMStorageVolume were being used.

  • vMA Linux prompt shows target server after removal. If you add a target server, then call vifpinit, that server becomes the default target and the vMA Linux prompt changes to show that server. When you remove the server, the default target set with the earlier vifpinit does not change and the prompt continues to display it.
    Workaround: You can change the prompt either by logging out from the Linux shell and logging in again, or by calling vifpinit explicitly for another server.

  • Illegal stack size warning in vifp.log. The vifp.log files include the following message: "'ThreadPool' 46912592633456 warning] Illegal stack size value '256' in configuration file, setting to default = 256". You can ignore these messages.
    Workaround: No workaround needed.


Last updated 5-May-2009