VMware

NSX for vSphere 6.1.5 Release Notes

NSX for vSphere 6.1.5 | 1 October 2015 | Build 3102213 | Document updated 22 Nov 2015

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • System Requirements and Installation
  • Upgrade Notes
  • Known Issues
  • Resolved Issues

What's New

See what's new and changed in 6.1.5, 6.1.4, 6.1.3, 6.1.2, 6.1.1, 6.1.0.

New in 6.1.5

Changes introduced in NSX vSphere 6.1.5:

  • Controller connectivity status: The NSX Manager user interface is enhanced to display the connectivity status between NSX controllers in the controller cluster. This allows you to view connectivity differences between nodes. For example, in a cluster with three controllers, A, B, and C, the connectivity status allows you to detect a partial cluster partition where A is connected to B and C, but B and C cannot see each other.

New in 6.1.4

Changes introduced in NSX vSphere 6.1.4:

  • Security fixes: This release completes a series of fixes to address the Skip-TLS (CVE-2014-6593), FREAK (CVE-2015-0204), and POODLE (CVE-2014-3566) vulnerabilities, as well as fixes for other issues. See the Resolved Issues section of this document. Please check that any third-party components (such as third-party partner solutions) support the updated JRE and OpenSSL versions used in NSX.

  • Compatibility with vSphere 6.0: NSX 6.1.4 is compatible with vSphere 6.0. However, the new vSphere features introduced in vSphere 6.0 have not been tested with NSX. These new vSphere features should not be used in environments where NSX is installed, as they are unsupported. For a list of specific NSX limitations with respect to vSphere 6.0, see the VMware Knowledge Base article 2110197.

  • Ability to force publishing of configuration to newly deployed NSX Controllers: NSX vSphere 6.1.4 introduces a Force Sync Controller command to push logical switch information to newly deployed NSX controllers. The following API call to NSX Manager will push the original configuration back to each of the newly deployed controllers.
    Method:

    PUT https://<NSX-Manager-IP>/api/2.0/vdn/controller/synchronize
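
    For illustration, the call can be scripted; a minimal Python sketch follows, using the requests library. The Manager address and credentials are placeholders, and TLS verification is disabled on the assumption of a self-signed certificate:

      import requests

      # Hypothetical NSX Manager address and credentials.
      resp = requests.put(
          "https://192.168.110.42/api/2.0/vdn/controller/synchronize",
          auth=("admin", "default"),
          verify=False,  # assumes NSX Manager's default self-signed certificate
      )
      resp.raise_for_status()
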
  • OSPF on sub-interfaces: OSPF is now supported on sub-interfaces (Fixed Issue 1436023).

  • End of MSRPC support in ALG rules: NSX Edge no longer supports the MSRPC protocol for Application Level Gateway rules on the Edge Firewall.

New in 6.1.3

Changes introduced in NSX vSphere 6.1.3:

  • Dynamic routing protocols are supported on sub-interfaces.
  • ECMP and Logical Firewall are supported at the same time with logical routing.

New in 6.1.2

NSX vSphere 6.1.2 updated NSX Edge and NSX Controllers to OpenSSL 1.0.1j and addressed several other CVEs. This release included an updated API call to address the POODLE vulnerability. Using this API call, you can disable SSLv3 on specific NSX Edges in your environment. For more information, see Resolved Issues.

New in 6.1.1

NSX vSphere 6.1.1 provided patches for all NSX appliances to address the BASH Shellshock security vulnerability.

New in 6.1.0

NSX vSphere 6.1 included multiple new features as well as operations, consumption, and hardening enhancements:

  • Highly available NSX Edge clusters with faster uplink speeds: NSX enables you to create highly available and distributed NSX Edge clusters, provides high-bandwidth uplink connections to physical networks, and ensures active-active redundancy at the network virtualization edge. ECMP on NSX Edge allows high-throughput aggregate North-South bandwidth and enables scale-out of the NSX Edge.
  • Enhanced micro-segmentation and firewall operations: NSX 6.1 improves micro-segmentation capabilities by providing improved provisioning, troubleshooting, and monitoring with NSX Distributed and Edge Firewalls. There is a new unified interface for configuring both Distributed and Edge firewalls. Integration of NSX with VMware vRealize Automation 6.1 (vRA 6.1, formerly VMware vCloud Automation Center) allows administrators to integrate their security automation workflows with their compute automation workflows. In addition, NSX 6.1 enables traffic redirection to partner products like next-generation firewalls and intrusion prevention services.
  • Connect multiple data centers or offer hybrid cloud services in Software Defined Data Center (SDDC): Using Layer 2 VPN on NSX Edge, IT teams can migrate workloads, consolidate data centers, or create stretched application tiers across multiple data centers. Service providers can offer tenant onboarding and cloud bursting services where tenant application networks are preserved across data centers without the need for NSX on customer premises.
  • Unified IP Address management across entire data center: With DHCP Relay, you can integrate existing DHCP services available in physical data centers into SDDC. This ensures a consistent IP addressing policy and easy IP management across the data center. NSX vSphere 6.1 supports multiple DHCP servers on a single logical router and allows multiple, existing DHCP servers to be integrated.
  • NSX Load Balancer Enhancement: To provide load balancing and high availability for more applications hosted in NSX, NSX 6.1.0 introduced UDP load balancing. This allows NSX to load balance applications such as syslog, NTP, and DNS.
  • Use Application Delivery Controllers (ADCs) in a software-defined data center (SDDC) context: NSX 6.1 allows customers using NSX partner ADCs to protect their investment and leverage advanced ADC services from compatible vendors.
  • Advanced host or network security services within SDDC: Enhanced partner integration with the NSX Service Composer supports multiple security services including suite solutions that comprise host versus network based services in a single policy.
  • Dynamic and secure self-service in SDDC: NSX 6.1 with vCloud Automation Center® helps you optimize resource utilization and scale by dynamically connecting self-service applications to NSX logical networks while ensuring that infrastructure security policies are automatically applied in order to isolate and protect applications. Refer to the VMware vCloud Automation Center Release Notes for a feature list.

System Requirements and Installation

The VMware Product Interoperability Matrix provides details about the compatibility of current and previous versions of VMware products and components, such as VMware vCenter Server.

See the notes above regarding Compatibility with vSphere 6.0.

Upgrade Notes

Versions 5.5.x, 6.0.x, and 6.1.x can be upgraded directly to 6.1.5. To upgrade NSX vSphere to 6.1.5, perform the following steps:

Before you start upgrading your hosts, make sure you place the hosts in maintenance mode. The dataplane for VMs will not work as expected if hosts are moved out of maintenance mode and VMs are powered on before the upgrade is complete.

  1. Upgrade NSX Manager and all NSX components to 6.1.5. See the NSX Installation and Upgrade Guide for instructions. If you are upgrading from vCNS 5.5.x, see the Resolved Issue 1429432.

    Note: The steps below show how to upgrade other VMware components. If you do not want to upgrade to vCenter Server 6.0 and ESXi 6.0, you do not need to complete the following steps.

    You must complete the remaining steps in this procedure if you want to upgrade vCenter Server and ESXi.

  2. Upgrade vCenter Server to 6.0. See VMware vSphere 6.0 documentation for information.

  3. To upgrade without any downtime, identify a subset of the hosts in your environment that you can start upgrading.

  4. Place the hosts in maintenance mode.

  5. Upgrade ESXi to 6.0. See VMware vSphere 6.0 documentation for information.
    Depending on the host settings and upgrade method, the hosts are either automatically rebooted or you have to manually reboot them.
    When the hosts restart, NSX Manager pushes the NSX VIBs for ESXi 6.0 to the hosts.

  6. When the hosts show Reboot Required on the Hosts and Clusters tab on the left side of the vSphere Web Client, restart the hosts again.
    NSX VIBs for ESXi 6.0 are now enabled.

  7. Take the upgraded hosts out of maintenance mode.

    Note: Do not move the hosts out of maintenance mode before this step.

  8. Repeat steps 4 through 7 for the next subset of hosts until all the hosts in your environment are upgraded.

Note on future upgrades from NSX 6.1.5: Upgrades from NSX 6.1.5 to NSX 6.2.0 are not supported. Instead, you must upgrade from NSX 6.1.5 to NSX 6.2.1 or later.

Known Issues

Known issues are grouped as follows:

  • Installation and Upgrade Issues
  • General Issues
  • NSX Manager Issues
  • NSX Edge and Logical Routing Issues
  • Firewall Issues
  • Logical Switch Issues
  • Service Deployment Issues
  • Service Insertion Issues

Installation and Upgrade Issues

DVPort fails to enable with Would block due to host prep issue
On an NSX-enabled ESXi host, the DVPort fails to enable with "Would block" due to a host preparation issue. When this occurs, the error message first noticed varies (for example, this may be seen as a VTEP creation failure in VC/hostd.log, a DVPort connect failure in vmkernel.log, or a SIOCSIFFLAGS error in the guest). This happens when VIBs are loaded after the DVS properties are pushed by vCenter. This may happen during upgrade.

Workaround: In NSX 6.1.4 and earlier, an additional reboot is required to address this type of DVPort failure in sites using an NSX logical router. In NSX 6.1.5, a mitigation is provided in the NSX software. This mitigation helps avoid a second reboot in the majority of cases. The root cause is a known issue in vSphere. See VMware knowledge base article 2107951 for details.

Unable to load balance virtual servers
When upgrading NSX Edge from 6.1.1 to 6.1.3, if you enable poolside SSL and a CA certificate in the application profile associated with a virtual server, NSX Edge automatically attempts to verify the back-end poolside certificate. Traffic cannot pass through to the back-end pool if the CA certificates configured in NSX Edge cannot verify the server certificate. At all times, the back-end pool server must have a valid certificate.

Workaround: If you are using NSX 6.1.3, deselect the "Configure Service Certificate" option for pool certificates and CA certificates in the poolside certificates configuration. You do not have to perform this step if you are using 6.1.1. In 6.1.1, the server certificate passes verification even if the configured CA cannot verify the server certificate sent by the back-end server.

Upgrades from vCNS 5.5.x require additional step
If you plan to upgrade to VMware NSX 6.1.x from VMware vCloud Networking and Security 5.5.x, verify whether uplink port name information is missing from the NSX database.
Workaround: See the VMware Knowledge Base article for instructions.

Rebooting NSX Manager during an upgrade causes inventory sync issue
If NSX Manager is rebooted during an upgrade, the NSX Host Preparation tab displays an "Install" option instead of an "Upgrade" option for prepared clusters. Do not reboot NSX Manager while an upgrade is in progress.

Workaround: Click the Install option to push the NSX VIBs to the hosts, or alternately invoke the following API:
POST https://<vsm-ip>/api/2.0/si/deploy?startTime=<time>&ignoreDependency=<true/false>
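
For illustration, a minimal Python sketch of this call; the Manager address, credentials, and start time are placeholders:

  import requests

  # Hypothetical values; substitute your NSX Manager address, credentials,
  # and deployment start time.
  resp = requests.post(
      "https://192.168.110.42/api/2.0/si/deploy",
      params={"startTime": "0", "ignoreDependency": "false"},
      auth=("admin", "default"),
      verify=False,  # assumes a self-signed certificate
  )
  resp.raise_for_status()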

Momentary loss of third-party anti-virus protection during upgrade
When upgrading from NSX 6.0.x to NSX 6.1.x or 6.2.0, you might experience momentary loss of third-party anti-virus protection for VMs. This issue does not affect upgrades from NSX 6.1.x to NSX 6.2.

Workaround: None.

Must enable automatic startup on destination host when migrating NSX appliances
On the hosts where NSX Edge appliances and NSX Controllers are first deployed, NSX enables automatic VM startup/shutdown. If the NSX appliance or controller VMs are later migrated to other hosts, the new hosts might not have automatic VM startup/shutdown enabled.

Workaround: When you migrate any NSX appliance or controller to a new cluster, check all hosts in the cluster to make sure that automatic VM startup/shutdown is enabled.

Cannot import firewall rules from an upgraded-from-vCNS NSX installation into a greenfield-installed NSX installation
The application ID for each well-known protocol or service (such as ICMPv6) is the same across all freshly installed NSX environments. However, in an installation that has been upgraded from vCNS to NSX, the application ID for a given service may not match the application ID used for that service on a fresh (greenfield-installed) NSX installation. This is a known behavior: new application IDs have been added to NSX but not to vCNS, so the number of application IDs in vCNS does not match the number in NSX. Because of this mismatch, you cannot export firewall (DFW) rules from an upgraded NSX installation and import them into a greenfield-installed NSX installation.

Workaround: None.

SSO cannot be reconfigured after upgrade
When the SSO server configured on NSX Manager is the one native on vCenter server, you cannot reconfigure SSO settings on NSX Manager after vCenter Server is upgraded to version 6.0 and NSX Manager is upgraded to version 6.1.3.

Workaround: None.

Service Deployment user interface displays error vAppConfig not present for VM

Workaround: If you see the above error, check for the following:

  1. Deployment of the service virtual machine is complete.
  2. No tasks such as cloning, re-configuring, etc., are in progress for that virtual machine in the vCenter Server task pane.

After verifying steps 1 and 2, delete the service virtual machine. On the Service Deployment user interface, the deployment is shown as Failed. When you click the red icon, an alarm indicating that the agent VM is not available is displayed for the host. When you resolve the alarm, the virtual machine is redeployed and powered on.

vSphere Web Client does not display Networking and Security tab after backup and restore
When you perform a backup and restore operation after upgrading to NSX vSphere 6.1.3, the vSphere Web Client does not display the Networking and Security tab.

Workaround: When an NSX Manager backup is restored, you are logged out of the NSX Manager virtual appliance management portal. Wait a few minutes before logging in to the vSphere Web Client.

Versioned deployment spec needs to be updated to 6.0.* if using vCenter Server 6.0 and ESX 6.0
Workarounds: The workaround depends on whether the partner is currently on vCloud Networking and Security or NSX for vSphere.

  • Partners that have NetX solutions registered with vCloud Networking and Security must update the registration to include a VersionedDeploymentSpec for 6.0.* with the corresponding OVF using REST API calls.
    1. Upgrade vSphere from 5.5 to 6.0.
    2. Add a versioned deployment specification for 6.0.x using the following API call (see the sketch after this list):
      POST https://<vCNS-IP>/api/2.0/si/service/<service-id>/servicedeploymentspec/versioneddeploymentspec

      <versionedDeploymentSpec>
        <hostVersion>6.0.x</hostVersion>
        <ovfUrl>http://sample.com/sample.ovf</ovfUrl>
        <vmciEnabled>true</vmciEnabled>
      </versionedDeploymentSpec>

      The URL for the OVF file is provided by the partner.
    3. Update the service by using the following REST call:
      POST https://<vsm-ip>/api/2.0/si/service/config?action=update
    4. Resolve the EAM alarm by following the steps below.
      1. In vSphere Web Client, click Home and then click Administration.
      2. In Solution, select vCenter Server Extension.
      3. Click vSphere ESX Agent Manager and then click the Manage tab.
      4. Right-click the failed agency status and select Resolve All Issues.
  • If partners that have NetX solutions registered with NSX upgrade vSphere to 6.0, they must update registration to include a VersionedDeploymentSpec for 6.0.* with the corresponding OVF. Follow the steps below:

    1. Add versioned deployment specification for 6.0.* using the following steps.
      1. In vSphere Web Client, click Home and then click Networking and Security.
      2. Click Service Definitions and then click the service name.
      3. Click Manage and then click Deployment.
      4. Click + and add ESX versions as 6.0.* and the OVF URL with the corresponding service virtual machine URL.
      5. Click OK.
    2. Resolve the issue by following the steps below.
      1. Click Installation.
      2. Click Service Deployments.
      3. Select the deployment and click Resolve.
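
The REST-based registration update in the first procedure above can be scripted. A minimal Python sketch follows; the service ID, OVF URL, Manager address, and credentials are all placeholders:

  import requests

  # XML body mirroring the versionedDeploymentSpec shown above; values are placeholders.
  body = """<versionedDeploymentSpec>
    <hostVersion>6.0.x</hostVersion>
    <ovfUrl>http://sample.com/sample.ovf</ovfUrl>
    <vmciEnabled>true</vmciEnabled>
  </versionedDeploymentSpec>"""

  resp = requests.post(
      "https://192.168.110.41/api/2.0/si/service/service-1"
      "/servicedeploymentspec/versioneddeploymentspec",
      data=body,
      headers={"Content-Type": "application/xml"},
      auth=("admin", "default"),
      verify=False,  # assumes a self-signed certificate
  )
  resp.raise_for_status()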

After upgrading NSX vSphere from 6.0.7 to 6.1.3, vSphere Web Client crashes on login screen
After upgrading NSX Manager from 6.0.7 to 6.1.3, exceptions are displayed on the vSphere Web Client login screen. You will not be able to log in and perform operations on either vCenter or NSX Manager.

Workaround: Log into VCVA as root and restart the vSphere Web Client service.

If vCenter is rebooted during NSX vSphere upgrade process, incorrect Cluster Status is displayed
When you prepare hosts in an environment with multiple NSX-prepared clusters during an upgrade, and vCenter Server is rebooted after at least one cluster has been prepared, the other clusters may show a Cluster Status of Not Ready instead of an Update link. The hosts in vCenter may also show Reboot Required.

Workaround: Do not reboot vCenter during Host Preparation.

After upgrading from vCloud Networking and Security 5.5.3 to NSX vSphere 6.0.5 or later, NSX Manager does not start up if you are using an SSL certificate with DSA-1024 keysize
SSL certificates with DSA-1024 keysize are not supported in NSX vSphere 6.0.5 onwards, so the upgrade is not successful.

Workaround: Before starting the upgrade, import a new SSL certificate with a keysize of 2048.

NSX Edge upgrade fails if L2 VPN is enabled on the Edge
L2 VPN configuration update from 5.x or 6.0.x to 6.1 is not supported. Hence, Edge upgrade fails if it has L2 VPN configured on it.

Workaround: Delete L2 VPN configuration before upgrading NSX Edge. After the upgrade, re-configure L2 VPN.

SSL VPN does not send upgrade notification to remote client
The SSL VPN gateway does not send an upgrade notification to the user. The administrator has to manually inform remote users that the SSL VPN gateway (server) has been updated and that they must update their clients.

After upgrading NSX from version 6.0 to 6.0.x or 6.1, NSX Edges are not listed on the user interface
When you upgrade from NSX 6.0 to NSX 6.0.x or 6.1, the vSphere Web Client plug-in may not upgrade correctly. This may result in user interface display issues such as missing NSX Edges.
This issue is not seen if you are upgrading from NSX 6.0.1 or later.

Workaround: None.

vSphere Distributed Switch MTU does not get updated
If you specify an MTU value lower than the MTU of the vSphere distributed switch when preparing a cluster, the vSphere Distributed Switch does not get updated to this value. This is to ensure that existing traffic with the higher frame size isn't unintentionally dropped.

Workaround: Ensure that the MTU you specify when preparing the cluster is higher than or matches the current MTU of the vSphere distributed switch. The minimum required MTU for VXLAN is 1550.

If not all clusters in your environment are prepared, the Upgrade message for Distributed Firewall does not appear on the Host Preparation tab of Installation page
When you prepare clusters for network virtualization, Distributed Firewall is enabled on those clusters. If not all clusters in your environment are prepared, the upgrade message for Distributed Firewall does not appear on the Host Preparation tab.

Workaround: Use the following REST call to upgrade Distributed Firewall:

PUT https://<vsm-ip>/api/4.0/firewall/globalroot-0/state
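
For illustration, a minimal Python sketch of this call; the Manager address and credentials are placeholders:

  import requests

  # Hypothetical NSX Manager address and credentials.
  resp = requests.put(
      "https://192.168.110.42/api/4.0/firewall/globalroot-0/state",
      auth=("admin", "default"),
      verify=False,  # assumes a self-signed certificate
  )
  resp.raise_for_status()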

If a service group is modified after the upgrade to add or remove services, these changes are not reflected in the firewall table
User-created service groups are expanded in the Edge Firewall table during upgrade; that is, the Service column in the firewall table displays all services within the service group. If the service group is modified after the upgrade to add or remove services, these changes are not reflected in the firewall table.

Workaround: Create a new service group with a different name and then consume this service group in the firewall rule.

Guest Introspection installation fails with error
When installing Guest Introspection on a cluster, the install fails with the following error:
Invalid format for VIB Module

Workaround: In the vCenter Web Client, navigate to vCenter Home > Hosts and Clusters and reboot the hosts that have Reboot Required next to them in the inventory on the left.

Service virtual machine deployed using the Service Deployments tab on the Installation page does not get powered on

Workaround: Follow the steps below.

  1. Manually remove the service virtual machine from the ESX Agents resource pool in the cluster.

  2. Click Networking and Security and then click Installation.

  3. Click the Service Deployments tab.

  4. Select the appropriate service and click the Resolve icon.
    The service virtual machine is redeployed.

If a service profile created in 6.0.x is bound to both security group and distributed portgroup or logical switch, Service Composer firewall rules are out of sync after upgrade to NSX 6.1.x
If a service profile binding was done to both security group and distributed portgroup or logical switch in 6.0.x, Service Composer rules are out of sync after the upgrade to 6.1. Rules cannot be published from the Service Composer user interface.

Workaround: Follow the steps below.

  1. Unbind the service profile from the distributed portgroup or logical switch through the Service Definition user interface.
  2. Create a new security group with the required distributed portgroup or logical switch as a member of that security group.
  3. Bind the service profile to the new security group through the Service Definition user interface.
  4. Synchronize the firewall rules through the Service Composer user interface.

General Issues

Distributed logical router fails with Would block error
NSX distributed logical routers may fail after host configuration changes. This occurs when vSphere fails to create a required VDR port on the host. This error may be seen as a DVPort connect failure in vmkernel.log, or a SIOCSIFFLAGS error in the guest. This can happen when VIBs are loaded after the DVS properties are pushed by vCenter.

Workaround: See VMware knowledge base article 2107951.

NSX has a URL length limit of 16,000 characters when assigning a single security tag to multiple VMs in one API call
You cannot assign a single security tag to multiple VMs in a single API call if the URL length exceeds 16,000 characters.

Workaround:

  1. Keep the URL length under 16,000 characters.

  2. Performance is optimized when approximately 500 VMs are tagged in a single call; tagging more VMs in a single call may result in degraded performance. A batching sketch follows below.
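
The batching pattern can be sketched as follows in Python. The tag_vms helper and its endpoint are hypothetical placeholders illustrating the pattern only; the actual tagging API and VM identifiers depend on your installation:

  import requests

  NSX = "https://192.168.110.42"    # hypothetical NSX Manager address
  AUTH = ("admin", "default")       # hypothetical credentials

  def tag_vms(tag_id, vm_ids):
      # Hypothetical helper: apply one security tag to a batch of VMs.
      # The endpoint below is a placeholder, not a documented NSX API.
      url = "%s/api/2.0/services/securitytags/tag/%s/vm" % (NSX, tag_id)
      return requests.post(url, data=",".join(vm_ids), auth=AUTH, verify=False)

  def chunked(items, size=500):
      # Yield batches of at most `size` items; ~500 VMs per call is the
      # performance sweet spot noted above and keeps URLs well under
      # the 16,000-character limit.
      for i in range(0, len(items), size):
          yield items[i:i + size]

  vm_ids = ["vm-%d" % n for n in range(2000)]   # hypothetical VM MOIDs
  for batch in chunked(vm_ids):
      tag_vms("securitytag-1", batch)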

ESXi hosts do not learn the list of available NSX controllers
To verify this issue is occurring, check the hosts for the existence of the config-by-vsm.xml file. A file with the controller IP addresses will exist under normal conditions. This issue may result from a temporary message bus failure.

Workaround: Look for any password failure messages and resync the message bus using the following REST API:

POST https://<nsxmgr-ip>/api/2.0/nwfabric/configure?action=synchronize

<nwFabricFeatureConfig>
  <featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
  <resourceConfig>
    <resourceId>{HOST/CLUSTER MOID}</resourceId>
  </resourceConfig>
</nwFabricFeatureConfig>
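
For illustration, a minimal Python sketch of this call; the Manager address, credentials, and MOID are placeholders:

  import requests

  # XML body mirroring the example above; domain-c7 is a hypothetical cluster MOID.
  body = """<nwFabricFeatureConfig>
    <featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
    <resourceConfig>
      <resourceId>domain-c7</resourceId>
    </resourceConfig>
  </nwFabricFeatureConfig>"""

  resp = requests.post(
      "https://192.168.110.42/api/2.0/nwfabric/configure",
      params={"action": "synchronize"},
      data=body,
      headers={"Content-Type": "application/xml"},
      auth=("admin", "default"),
      verify=False,  # assumes a self-signed certificate
  )
  resp.raise_for_status()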

vCNS - SSL VPN-Plus support on Windows 10
The SSL VPN portal does not open in the Internet Explorer or Edge browser on Windows 10; it does, however, work in other browsers.

Workaround: Use the SSL VPN PHAT client, which works on Windows 10.

Security policy name does not allow more than 229 characters
The security policy name field in the Security Policy tab of Service Composer can accept up to 229 characters. This is because policy names are prepended internally with a prefix.

Workaround: None.

Unable to power on guest virtual machine
When you power on a guest virtual machine, the error All required agent virtual machines are not currently deployed may be displayed.

Workaround: Follow the steps below.

  1. In the vSphere Web Client, click Home and then click Administration.

  2. In Solution, select vCenter Server Extension.

  3. Click vSphere ESX Agent Manager and then click the Manage tab.

  4. Click Resolve.

With NSX installed on vCSA 6.0, OS-local user cannot see NSX in vSphere Web Client
If you have installed NSX on vCSA 6.0, an enterprise-authenticated vSphere user can see the NSX Networking and Security tab in the vSphere Web Client, but an OS-local vSphere user cannot.

Workaround: Perform the following steps:

  1. Log in to the vSphere Web Client as the administrator@vsphere.local user.
  2. Navigate to Administration > Users and Groups > Groups tab and select the administrator group under the vsphere.local domain.
  3. Click the Add Member button on the Group Members grid and add the root user to this group.
  4. Navigate to NSX Manager, click Networking and Security > NSX Manager > Manage > Users tab, and click the Add user button (+).
  5. Assign a role to the root user and click the Finish button.
  6. Log out and log back in as the root user.

NSX Manager Issues

NSX Manager flow monitoring may fail to display flows for time interval of 'Last 15 Minutes' or 'Last 1 hour'

In NSX installations with a high volume of traffic, the NSX Manager flow monitoring user interface displays the message 'No Flow Records' when the selected time interval is 'Last 15 Minutes' or 'Last 1 hour'. This occurs when flow records are not refreshed at a rate sufficient to ensure that the latest flow records are available.

Workaround: For high-traffic installations, VMware suggests that customers use either IPFIX or logging to a remote syslog server (for example, using VMware LogInsight) to gather flows.

NSX Manager automatically restarts if too many draft Service Composer workflows accumulate
If there are a large number of objects in the source/destination/applied-to fields of your firewall rules, and NSX has auto-saved many Service Composer workflows, NSX Manager may restart unexpectedly. This can also occur if you are using Service Composer to apply policies to a large number of service groups and NSX has auto-saved many Service Composer workflows.

Workaround: If your installation fits the description above, and you encounter unexpected NSX Manager restarts, contact VMware support for assistance deleting saved Service Composer draft workflows.

After NSX Manager backup is restored, REST call shows status of fabric feature "com.vmware.vshield.vsm.messagingInfra" as "Red"
When you restore the backup of an NSX Manager and check the status of fabric feature "com.vmware.vshield.vsm.messagingInfra" using a REST API call, it is displayed as "Red" instead of "Green".

Workaround: Use the following REST API call to reset communication between NSX Manager and a single host or all hosts in a cluster.
POST https://<nsxmgr-ip>/api/2.0/nwfabric/configure?action=synchronize
<nwFabricFeatureConfig>
  <featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
  <resourceConfig>
    <resourceId>HOST/CLUSTER MOID</resourceId>
  </resourceConfig>
</nwFabricFeatureConfig>

Networking and Security Tab not displayed in vSphere Web Client
After vSphere is upgraded to 6.0, you cannot see the Networking and Security Tab when you log in to the vSphere Web Client with the root user name.

Workaround: Log in as administrator@vsphere.local or as any other vCenter user that existed on vCenter Server before the upgrade and whose role was defined in NSX Manager.

Cannot remove and re-add a host to a cluster protected by Guest Introspection and third-party security solutions
If you remove a host from a cluster protected by Guest Introspection and third-party security solutions by disconnecting it and then removing it from vCenter Server, you may experience problems if you try to re-add the same host to the same cluster.

Workaround: To remove a host from a protected cluster, first put the host in maintenance mode. Next, move the host into an unprotected cluster or outside all clusters and then disconnect and remove the host.

vMotion of NSX Manager may display error Virtual ethernet card Network adapter 1 is not supported
You can ignore this error. Networking will work correctly after vMotion.

NSX Edge and Logical Routing Issues

When a BGP neighbor filter rule is modified, the existing filters may not be applied for up to 40 seconds
When BGP filters are applied to an NSX Edge running IBGP, it may take up to 40 seconds for the filters to be applied on the IBGP session. During this time, NSX Edge may advertise routes which are denied in the BGP filter for the IBGP peer.

Workaround: None.

After enabling ECMP on a Logical Router, northbound Edge does not receive prefixes from the Logical Router

Workaround: Follow the steps below:

  1. Disable ECMP on the Logical Router.
  2. Disable OSPF.
  3. Enable ECMP.
  4. Enable OSPF.

If Certificate Authentication is enabled under Authentication configuration of SSL VPN-Plus service, connection to the SSL VPN server from an older version of Windows client fails
If Certificate Authentication is enabled, the TLS handshake between an older version of the Windows client and the latest version of SSL VPN fails. This prevents the Windows client from connecting to SSL VPN. This issue does not occur with Linux and Mac clients or with a browser-based connection to SSL VPN.

Workaround: Upgrade the Windows client to the latest version (6.1.4).

Upgrade of standalone NSX Edge as L2 VPN client is not supported

Workaround: You must deploy a new standalone Edge OVF and reconfigure the appliance settings.

When the direct aggregate network in the local and remote subnets of an IPsec VPN channel is removed, the aggregate route to the indirect subnets of the peer Edge also disappears
When there is no default gateway on the Edge and, while configuring IPsec, you simultaneously remove all of the directly connected subnets from the local subnets and some of the remote subnets, the remaining peer subnets become unreachable over IPsec VPN.

Workaround: Disable and re-enable IPsec VPN on NSX Edge.

Modifying an SSL VPN-Plus logon/logoff script does not take effect
The modified script is correctly reflected in the vSphere Web Client but not on the gateway.

Workaround: Delete the original script and add it again.

Adding a route that is learned through protocol as connected results in the local Forwarding Information Base (FIB) table showing both connected and dynamically learned routes
If you add a route already learned through protocol as connected, the local FIB shows both connected and dynamically learned routes. The dynamically learned route is shown as preferred over the route directly connected.

Workaround: Withdraw the learned route from the route advertisement so that it gets deleted from the FIB table and configure only the connected route.

If an NSX Edge virtual machine with one sub-interface backed by a logical switch is deleted through the vCenter Web Client user interface, data path may not work for a new virtual machine that connects to the same port
When the Edge virtual machine is deleted through the vCenter Web Client user interface (and not from NSX Manager), the VXLAN trunk configured on the dvPort over the opaque channel does not get reset, because the trunk configuration is managed by NSX Manager.
 
Workaround: Manually delete the vxlan trunk configuration by following the steps below:

  1. Navigate to the vCenter Managed Object Browser by typing the following in a browser window:
    https://vc-ip/mob?vmodl=1
  2. Click Content.
  3. Retrieve the dvsUuid value by following the steps below.
    1. Click the rootFolder link (for example, group-d1(Datacenters)).
    2. Click the data center name link (for example, datacenter-1).
    3. Click the networkFolder link (for example, group-n6).
    4. Click the DVS name link (for example, dvs-1).
    5. Copy the value of uuid.
  4. Click DVSManager and then click updateOpaqueDataEx.
  5. In selectionSet, add the following XML
    <selectionSet xsi:type="DVPortSelection">
        <dvsUuid>value</dvsUuid>
        <portKey>value</portKey> <!--port number of the DVPG where trunk vnic got connected-->
    </selectionSet>
  6. In opaqueDataSpec, add the following XML
    <opaqueDataSpec>
        <operation>remove</operation>
        <opaqueData>
          <key>com.vmware.net.vxlan.trunkcfg</key>
          <opaqueData></opaqueData>
        </opaqueData>
    </opaqueDataSpec>
  7. Set isRuntime to false.
  8. Click Invoke Method.
  9. Repeat steps 5 through 8 for each trunk port configured on the deleted Edge virtual machine.

When Default Originate is enabled, BGP filter for deny default route does not get applied
When BGP Default Originate is enabled on an NSX Edge, a default route gets advertised to all BGP neighbors unconditionally. If you do not want a BGP neighbor to install the default route advertised by this BGP speaker, you must configure an inbound policy on that BGP neighbor to reject the default route.
 
Workaround: Configure an inbound policy on the appropriate BGP neighbor to reject the default route.

Cannot add non-ASCII characters in bridge or tenant name for Logical Router
NSX controller APIs do not support non-ASCII characters.
 
Workaround: Use ASCII characters in bridge and tenant names. You can then edit the names to include non-ASCII characters.

SNAT and Load Balancer (with L4 SNAT) configured on a sub-interface do not work
SNAT rule configuration succeeds on NSX Edge, but the data path for the rule does not work due to RP filter checks.
 
Workaround: Contact VMware support for help in relaxing the RP filter check on NSX Edge.

When egress optimization is enabled for L2 VPN, load balancers with pool members stretched across site are shown as down
With egress optimization, both L2 VPN client and server have the same internal IP address. Because of this, any packet from a pool member to the load balancer does not reach NSX Edge.
 
Workaround: Do one of the following.

  • Disable egress optimization.
  • Assign an IP address to the load balancer that is different from the egress-optimized IP address.

Static routes do not get pushed to hosts when a next hop address is not specified
The user interface allows you to create a static route on an NSX Edge device without specifying a next hop address. If you do not specify a next hop address, the static route does not get pushed to hosts.
 
Workaround: Always specify a next hop address for static routes.

Cannot configure NSX firewall using security groups or other grouping objects defined at global scope
Administrator users defined at the NSX Edge scope cannot access objects defined at the global scope. For example, if user abc is defined at Edge scope and security group sg-1 is defined at global scope, then abc will not be able to use sg-1 in firewall configuration on the NSX Edge.

Workaround: The administrator must use grouping objects defined at NSX Edge scope only, or must create a copy of the global scope objects at the NSX Edge scope.

Logical Router LIF routes are advertised by upstream Edge Services Gateway even if Logical Router OSPF is disabled
Upstream Edge Services Gateway will continue to advertise OSPF external LSAs learned from Logical Router connected interfaces even when Logical Router OSPF is disabled.

Workaround: Disable redistribution of connected routes into OSPF manually and publish before disabling OSPF protocol. This ensures that routes are properly withdrawn.

When HA is enabled on Edge Services Gateway, OSPF hello and dead interval configured to values other than 30 seconds and 120 seconds respectively can cause some traffic loss during failover
When the primary NSX Edge fails with OSPF running and HA enabled, the time required for the standby to take over exceeds the graceful restart timeout, and as a result OSPF neighbors remove learned routes from their Forwarding Information Base (FIB) tables. This causes a dataplane outage until OSPF reconverges.

Workaround: Set the hello interval to 30 seconds and the dead interval to 120 seconds on all neighboring routers. This enables graceful failover without traffic loss.

The user interface allows you to add multiple IP addresses to a Logical Router interface though it is not supported
This release does not support multiple IP addresses for a logical router interface.

Workaround: None.

SSL VPN does not support Certificate Revocation Lists (CRL)
A CRL can be added to NSX Edge, but this CRL is not consumed by SSL VPN.

Workaround: CRL is not supported, but you can enable user authentication with client certificate authentication.

Must use IP address, not hostname, to add an external authentication server to SSL VPN-Plus
You cannot use the FQDN or hostname of the external authentication server.

Workaround: You must use the IP address of the external authentication server.

Firewall Issues

Service Composer created firewall rules are automatically disabled when referenced services are force deleted
When you force-delete services that are referenced in Service Composer rules, the rules referencing the deleted services are automatically disabled without any warning or user alert. This is done to prevent Service Composer from getting out of sync.

Workaround: You must fix the firewall rule in the security policy that has the invalid service. Make sure you do not force-delete services and security groups that are used in Service Composer firewall rules.

Adding a new VM to logical switch does not automatically publish NSX Edge firewall rules
When you add new VMs to a logical switch, you might notice that the NSX Edge firewall rules are not published automatically. You might have to manually republish the firewall rules so that the new VMs are applied with the correct rule.

Workaround: Place each logical switch in a security group. If the firewall rules are defined using a security group with the logical switch as a member, and a new VM is added to that logical switch, the associated firewall rules are automatically updated to include the new VM's IP address. You do not need to republish the firewall rules.

Firewall rule republish fails after DELETE API is used
If you delete the entire firewall configuration through the DELETE API method and then try to republish all the rules from a previously saved firewall rules draft, then the rule publish will fail.

Workaround: None.

Some versions of Palo Alto Networks VM-Series do not work with NSX Manager default settings
Some NSX 6.1.4 components disable SSLv3 by default. Before you upgrade, please check that all third-party solutions integrated with your NSX deployment do not rely on SSLv3 communication. For example, some versions of the Palo Alto Networks VM-series solution require support for SSLv3, so please check with your vendors for their version requirements.

User interface shows error Firewall Publish Failed despite successful publish
If distributed firewall is enabled on a subset of clusters in your environment and you update an application group that is used in one or more active firewall rules, any publish action on the user interface will display an error message containing IDs of the hosts belonging to the clusters where NSX firewall is not enabled.
Despite error messages, rules will be successfully published and enforced on the hosts where Distributed Firewall is enabled.
 
Workaround: Contact VMware support to clear the user interface messages.

If you delete the firewall configuration using a REST API call, you cannot load and publish saved configurations
When you delete the firewall configuration, a new default section is created with a new section ID. When you load a saved draft (that has the same section name but an older section ID), section names conflict and display the following error:
Duplicate key value violates unique constraint firewall_section_name_key
 
Workaround: Do one of the following:

  • Rename the current default firewall section after loading a saved configuration.
  • Rename the default section on a loaded saved configuration before publishing it.

When IPFIX configuration is enabled for Distributed Firewall, firewall ports in the ESXi management interface for NetFlow for vDS or SNMP may be removed
When a collector IP and port are defined for IPFIX, the firewall for the ESXi management interface is opened in the outbound direction for the specified UDP collector ports. This operation may remove the dynamic ruleset configuration on the ESXi management interface firewall for the following services if they were previously configured on the ESXi host:

  • Netflow collector port configuration on vDS
  • SNMP target port configuration

Workaround: To add the dynamic ruleset rules back, refresh the NetFlow settings for the vDS in the vCenter Web Client, and add the SNMP target again using esxcli system snmp commands. These steps must be repeated if the ESXi host is rebooted after IPFIX configuration is enabled or if the esx-vsip VIB is uninstalled from the host.

Logical Switch Issues

Creating a large number of logical switches with high concurrency using an API call may result in some failures
This issue may occur if you create a large number of logical switches using the following API call:

POST https://<nsxmgr-ip>/api/2.0/vdn/scopes/scopeID/virtualwires

Some logical switches may not be created.
 
Workaround: Re-run the API call for any logical switches that were not created, as in the sketch below.
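
A minimal Python sketch of such a retry, assuming the virtualwires call above; the scope ID, create-spec body, switch names, Manager address, and credentials are placeholders:

  import time
  import requests

  def create_logical_switch(name, scope="vdnscope-1"):
      # The create spec here is abbreviated and hypothetical.
      body = ("<virtualWireCreateSpec><name>%s</name>"
              "<tenantId>tenant-1</tenantId></virtualWireCreateSpec>" % name)
      return requests.post(
          "https://192.168.110.42/api/2.0/vdn/scopes/%s/virtualwires" % scope,
          data=body,
          headers={"Content-Type": "application/xml"},
          auth=("admin", "default"),
          verify=False,  # assumes a self-signed certificate
      )

  for name in ["ls-web", "ls-app", "ls-db"]:    # hypothetical switch names
      for attempt in range(3):                  # re-run failed creations
          if create_logical_switch(name).ok:
              break
          time.sleep(2)                         # brief pause before retrying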

Service Deployment Issues

Data Security service status is shown as UP even when IP connectivity is not established
The Data Security appliance may not have received an IP address from DHCP or may be connected to an incorrect port group.
 
Workaround: Ensure that the Data Security appliance gets an IP address from DHCP or an IP pool and is reachable from the management network. Check whether the Data Security appliance can be pinged successfully from NSX/ESX.

Old service virtual machines not functioning
Old service virtual machines that were left behind on hosts during host removal from the vCenter Server remain disconnected and are unable to function when the host is added back to the same vCenter Server.
 
Workaround: Follow the steps below:

  1. Move the host from the protected cluster to either an unprotected cluster or outside all clusters. This will uninstall the service virtual machines from the host.
  2. Remove the host from the vCenter Server.

Service Insertion Issues

Deleting security rules via REST displays error
If a REST API call is used to delete security rules created by Service Composer, the corresponding rule set is not actually deleted in the service profile cache resulting in an ObjectNotFoundException error.
 
Workaround: None.

Security policy configured as a port range causes firewall to go out of sync
Configuring security policies as a port range (for example, "5900-5964") will cause the firewall to go out of sync with a NumberFormatException error.
 
Workaround: You must configure firewall security policies as a comma-separated protocol port list.

Resolved Issues

See what's resolved in 6.1.5, 6.1.4, 6.1.3, 6.1.2, 6.1.1, 6.1.0.

The following issues have been resolved in the 6.1.5 release:

  • Copy of VM via vCloud Connector fails when route traverses NSX Load Balancer

    This has been fixed in NSX 6.1.5.

  • ESXi host might lose network connectivity
    An ESXi host might lose network connectivity and experience stability issues when multiple error messages similar to the following are logged:
    WARNING: Heartbeat: 785: PCPU 63 didn't have a heartbeat for 7 seconds; *may* be locked up.

    This has been fixed in NSX 6.1.5.

  • Control plane connectivity fails for NSX Controller
    Control plane connectivity was seen to fail for a Controller, showing an error in netcpa related to txInProgress.

    This has been fixed in NSX 6.1.5.

  • NSX Manager web client displays error: Code 301002
    When you navigate to NSX Manager > Monitor > System Events, the web client displays the following message: Filter config not applied to vnic. Code 301002.

    This has been fixed in NSX 6.1.5.

  • After NSX upgrade, guest introspection fails to communicate with NSX Manager
    After upgrading from NSX 6.0.x to NSX 6.1.x or from NSX 6.0.x to NSX 6.2 and before the guest introspection service is upgraded, the NSX Manager cannot communicate with the Guest Introspection USVM.

    This has been fixed in NSX 6.1.5.

  • VMs disconnect during vMotion
    VMs disconnect during vMotion on 6.0.8 with the message "VISP heap depleted".

    This has been fixed in NSX 6.1.5.

  • LDAP Domain Objects take too long to return or fail to return in Security Group Object Selection screen

    This has been fixed in NSX 6.1.5.

  • Delayed mouse movement when viewing FW rules
    In NSX Networking and Security section of vSphere Web Client, moving the mouse over rows in the Firewall Rules display results in a 3 second delay each time the mouse is moved.

    This has been fixed in NSX 6.1.5.

  • Some IP SpoofGuard rules in NSX-v are not applied correctly
    Some IP SpoofGuard rules in NSX-v are not applied correctly. The instance is not present in the security group in NSX-v and must be manually added to the security group.

    This has been fixed in NSX 6.1.5.

  • In VIO Deployment, some newly deployed VMs appear to have valid port and IPs assigned but do not have access to the network

    This has been fixed in NSX 6.1.5.

  • One of the NSX Controllers does not hand over master role to other controllers when it is shut down
    Typically, when an NSX Controller assumes operations master role and is preparing to shut down, it automatically hands over the master role to other controllers. In this error case, the controller fails to hand over the role to other controllers, its status becomes interrupted, and it goes into disconnected mode.

    This has been fixed in NSX 6.1.5.

  • Unable to register NSX Manager 6.1.4 with vCenter, gives error: NSX Management Service operation failed

    This has been fixed in NSX 6.1.5.

  • Cannot redeploy NSX Edge with L2VPN Service configured with CA-signed certificate
    Cannot redeploy or change size of NSX Edge with L2VPN Service configured with CA-signed or self-signed certificate.

    This has been fixed in NSX 6.1.5.

  • Bulk deletion in Service Composer user interface generates "between 0 to 0" message
    Bulk deletion of policies (~100) from the NSX Service Composer user interface generates a message, "It should be between 0 to 0". You may safely ignore this message.

    This has been fixed in NSX 6.1.5.

  • Slow login to NSX tab of vSphere web client with AD-backed SSO
    In NSX for vSphere installations that use SSO for AD authentication, the user's initial login to the NSX Networking and Security section of the vSphere Web Client takes a long time.

    This has been fixed in NSX 6.1.5.

  • Background operation for Policy deletion may take long time with high CPU utilization
    Deletion of a policy reevaluates all remaining policies in the background. This may take more than an hour on setups with a large number of policies, security groups, and/or rules per policy.

    This has been fixed in NSX 6.1.5 and 6.2.

  • NSX Manager CPU utilization is high after adding it to Active Directory domain
    NSX Manager CPU utilization is high after adding it to Active Directory domain. In the system logs of the NSX Manager, multiple Postgres threads are seen as running.

    This has been fixed in NSX 6.1.5.

  • Connectivity loss after removing a logical interface (LIF) in installations with dynamic routing
    A problem was identified in the NSX Logical Router (Edge/DLR) when using dynamic routing (OSPF & BGP) that will cause network connectivity loss after removing a LIF. This affects NSX versions 6.0.x through 6.1.4.

    In NSX installations that use dynamic routing, each LIF has a redistribution rule index ID associated with it. When a user deletes a LIF in such installations, the index IDs assigned to some active LIFs may change. This index modification can result in a temporary loss of network connectivity for LIFs whose index IDs are changed. If the LIF deletion is serialized, you will see 5-30 seconds of disruption on affected LIFs after each LIF deletion. If the LIF deletion is done in bulk, you will see a total of 5-30 seconds of disruption on affected LIFs.

    This has been fixed in NSX 6.1.5.

  • All queued publishable tasks are marked as failed after the default timeout of 20 minutes
    Queues are maintained per NSX Edge and can publish in parallel for different Edges. The queued up publishable tasks are executed sequentially where each task takes approximately 3-4 seconds, and 300-400 tasks are completed in 20 minutes. In situations where more than 400 publish tasks for an Edge are queued up in a short time and have exceeded the publish timeout limit of 20 minutes while waiting for execution, the tasks are automatically marked as failed. NSX Manager responds to the failure by reverting to the last known successful configuration where publication to the Edge has succeeded. Applications or plugins that are sending configuration updates for an Edge to the NSX Manager in a burst mode need to monitor the success and failure status of the task using the associated job id.

    This has been fixed in NSX 6.1.5.

  • The message bus randomly does not start after NSX Edge reboot
    After restarting an Edge VM, the message bus often does not start after powering on, and an additional reboot is required.

    This has been fixed in NSX 6.1.5 and NSX 6.2.0.

  • NSX Manager is non-functional after running the 'write erase' command
    When you restart the NSX Manager after running the 'write erase' command, you might notice that the NSX Manager may fail to operate as expected. For example, the setup command may be missing from the NSX CLI.

    This has been fixed in NSX 6.1.5 and NSX 6.2.0.

  • Syslog shows host name of backed up NSX Manager on the restored NSX Manager
    Suppose the host name of the first NSX Manager is A, and a backup is created for that NSX Manager. A second NSX Manager is then installed and, following the backup-and-restore documentation, configured with the same IP address as the old Manager, but with host name B. When a restore is run on this NSX Manager, the restored NSX Manager shows host name A just after the restore and host name B again after a reboot.

    This has been fixed in NSX 6.1.5 and NSX 6.2.0.

The following issues were resolved in the 6.1.4 release:

  • In VXLAN configurations with NIC Teaming on hosts where the DVS switch has 4 physical NIC (PNIC) uplinks, only one of 4 VMKNICs gets the IP address
    In NSX 6.1.3, when you configure VXLAN with source port ID teaming and the DVS has 4 physical NIC (PNIC) uplinks, NSX creates 4 VMKNICs on the host. In this teaming mode, all four VMKNICs should be assigned IP addresses, but due to an issue in NSX 6.1.3, only one VMKNIC on each host is assigned an IP address.

    This has been fixed in NSX 6.1.4.

  • NSX Manager becomes non-responsive
    NSX Manager becomes non-responsive when it fails to recognize the network adapter. This fix replaces the e1000 network adapter of the vCNS Manager Appliance with a vmxnet3 adapter. In new installations of vCNS 5.5.4.1 and later, this fix is automatically applied. If you are upgrading from vCNS 5.5.x to NSX 6.1.4 or later, you must manually apply the fix as explained in VMware Knowledge Base article 2115459.

    This has been fixed in NSX 6.1.4.

  • Fixed Issue 1443458: In installations with multiple vSphere clusters, hosts may disappear from the installation tab
    In installations with multiple vSphere clusters, the host preparation screen in NSX Manager may take approximately 3 minutes to load all clusters. The hosts may disappear from the window temporarily.

    This has been fixed in NSX 6.1.4.

  • Fixed Issue 1424992: NSX Networking and Security tab of vSphere Web Client may show data service timeout error when using a large AD store
    In NSX installations with vSphere 6.0, the NSX Networking and Security tab of the vSphere Web Client may show a data service timeout error when using a large AD store. There is no workaround. Reload the Web Client in your browser.

    This has been fixed in NSX 6.1.4.

  • Virtual machines connected to a VMware NSX for vSphere Logical Switch and a Distributed Router experience very low bandwidth/throughput
    This fix addresses the issue covered by VMware Knowledge Base article 2110598.

    This has been fixed in NSX 6.1.4.

  • Fixed Issue 1421287: L2VPN goes down after pinging broadcast IP. tap0 interface goes down on standalone Edge.
    When pinging the broadcast address, the MAC addresses are learned but the L2VPN tunnel goes down. The tap0 interface goes down after the Edge learns a lot of MAC addresses.

    This has been fixed in NSX 6.1.4.

  • High CPU usage at NSX Manager during vCenter inventory updates
    Provisioning firewall rules with a large number of security groups requires a large number of connections to the internal Postgres database. The resulting concurrent CPU threads can lead to a long period of high CPU usage on the NSX Manager server.

    This has been fixed in NSX 6.1.4.

  • Long NSX Manager load times on large domain accounts
    When domain users who belong to a large number of groups log in to the vSphere Web Client, it takes an extremely long time to access the NSX Manager interface.

    This has been fixed in NSX 6.1.4.

  • Fixes to address CVE-2014-6593 "Skip-TLS" and CVE-2015-0204 "FREAK" vulnerabilities (collectively, "SMACK" vulnerabilities)
    This fix addresses the issues generally known as the "SMACK" vulnerabilities. This includes the "FREAK" vulnerability, which affects OpenSSL-based clients by allowing them to be fooled into using export-grade cipher suites. SSL VPN clients have been updated with OpenSSL version 1.0.1L to address this. OpenSSL on the NSX Edge has been updated to version 1.0.1L as well.
     
    This fix also addresses the "Skip-TLS" vulnerability. The Oracle (Sun) JRE package is updated to 1.7.0_75 (version 1.7.0 update 75), because Skip-TLS affected Java versions prior to update 75. Oracle has documented the CVE identifiers that are addressed in JRE 1.7.0_75 in the Oracle Java SE Critical Patch Update Advisory for January 2015.

  • Fixes to address CVE-2014-3566 "POODLE" vulnerability
    The 6.1.4 release included changes that address the CVE-2014-3566 vulnerability (the SSLv3 vulnerability known as "POODLE"). The changes included:
    • The disabling of SSLv3 by default on NSX Manager (since 6.1.2 release). To re-enable SSLv3 support on this component, contact VMware support.
    • The disabling of SSLv3 by default on the NSX Edge Load Balancer (since 6.1.4 release). To re-enable SSLv3 support on this component, see VMware Knowledge Base article 2116104.
    • A new API method that allows you to disable and re-enable SSLv3 on the NSX Edge SSL VPN (since 6.1.4 release). To disable and re-enable SSLv3 support on this component, see VMware Knowledge Base article 2115871.
    • An update of the NSX Edge system SSL library to OpenSSL 0.9.8zc.

The following issues were resolved in the 6.1.3 release:

  • VMs lose connectivity to networks that are connected via valid Distributed Logical Router configurations
    This occurred due to an error that prevented NSX Manager from being updated with the latest NSX Controller state. During this error condition, NSX failed to sync the SSL state (<controllerConfig><sslEnabled>true/false</sslEnabled></controllerConfig>) to the ESX host after a reboot of NSX Manager.

    This issue was resolved in NSX 6.1.3.

  • Unable to upgrade to NSX vSphere Controllers from vCNS 5.0.x or 5.1.x
    If the vCNS to NSX vSphere migration path started from vCNS version 5.0.x or 5.1.x, NSX vSphere Controller deployment failed due to database schema changes across releases.

    This issue was resolved in NSX 6.1.3.

  • Host prep/VIB installation failed when ESXi host was configured for lockdown mode
    Host preparation and installation of VXLAN VIBs failed when the ESXi host was configured for lockdown mode.

    This issue was resolved in NSX 6.1.3.

  • Server Certificate Validation for SSL VPN Linux and OS X clients
    Based on customer feedback, trust management for the Mac and Linux SSL VPN clients has been improved. The clients now use the standard tools available on these platforms to establish trust with the SSL VPN server. The Windows VPN client already takes advantage of the platform trust store when installed with "Check Certificates" enabled. For more information, see the NSX Administration Guide.

    This issue was resolved in NSX 6.1.3.
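
    Using the platform trust store means the client validates the VPN server's certificate against the operating system's CA bundle rather than a private list. A minimal Python sketch of that idea; the host name is a placeholder, and this illustrates the concept rather than the client's actual code:

      import socket
      import ssl

      # create_default_context() loads the operating system's trusted CA
      # certificates and enables host name verification.
      context = ssl.create_default_context()

      with socket.create_connection(("vpn.example.com", 443)) as sock:
          with context.wrap_socket(sock, server_hostname="vpn.example.com") as tls:
              # The handshake fails unless the server certificate chains
              # to a CA trusted by the platform.
              print(tls.getpeercert()["subject"])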

  • On OSPF-enabled NSX Edge Services Gateways, OSPF adjacency was not established and remained stuck in the two-way state
    OSPF adjacency failed to come up and remained stuck in the two-way state because dynamic routing protocols were not supported on sub-interfaces.

    This issue was resolved in NSX 6.1.3.

  • Enabling Equal-Cost Multi-Path routing (ECMP) on a Logical Router disabled firewall
    When ECMP was enabled on the Global Routing tab, the firewall was automatically disabled.

    This issue was resolved in NSX 6.1.3.

  • Configuring Layer 2 Bridging on a Distributed Logical Router failed
    Configuring Layer 2 Bridging on a Distributed Logical Router in NSX for vSphere 6.1.2 failed with the error "User is not authorized to access object edge-XX and feature edge.bridging". See the VMware Knowledge Base article 2099414 for details.

    This issue was resolved in NSX 6.1.3.

  • Two unequal cost paths installed in FIB
    When NSX Edge has a static route for a network and also learns a dynamic route to the same network, the static route is correctly chosen over the dynamic route, because static routes are preferred. However, when the interface corresponding to the static route was toggled, the FIB incorrectly ended up installing two unequal-cost paths to the network.

    This issue was resolved in NSX 6.1.3.
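
    The expected behavior is a simple preference rule: paths to the same prefix are compared by administrative distance, and only the best path (or equal-cost set of best paths) is installed in the FIB. A minimal sketch, with illustrative distance values:

      DISTANCE = {"static": 1, "ospf": 110, "bgp": 200}  # illustrative values

      def best_paths(routes):
          # routes: list of (source, next_hop) pairs for one prefix.
          # Only the lowest-distance paths belong in the FIB.
          lowest = min(DISTANCE[src] for src, _ in routes)
          return [(src, nh) for src, nh in routes if DISTANCE[src] == lowest]

      # After an interface flap, re-adding the static route must replace
      # the dynamic path rather than coexist with it:
      assert best_paths([("static", "10.0.0.1"), ("ospf", "10.0.1.1")]) == [("static", "10.0.0.1")]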

  • SSL VPN Mac client for OS X Yosemite displayed certificate authentication error
    Because OS X Yosemite no longer uses /Library/StartupItems/ for startup items, the VMware startup script was not executed when the machine booted.

    This issue was resolved in NSX 6.1.3.

  • Firewall Rule publish failed due to whitespace insertion
    Firewall rule publication failed because IP translation inserted whitespace into the generated IP ranges when nested security groups contained excluded members.

    This issue was resolved in NSX 6.1.3.

  • Virtual machines experienced a network interruption of up to 30 seconds after being vMotioned from one ESXi host to another when Distributed Firewall rules had security groups in the Source and/or Destination fields
    For more information, see the VMware Knowledge Base article 2110197.

    This issue was resolved in NSX 6.1.3.

  • Firewall IP ruleset with spaces was accepted by firewall user interface but not published to hosts
    An IP ruleset with intervening spaces, such as '10.10.10.2 ,10.10.10.3', was accepted in the firewall user interface, but the rules were not published to the hosts.

    This issue was resolved in NSX 6.1.3.
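
    Both this issue and the whitespace-insertion issue above reduce to normalizing a comma-separated address list before it is compiled into rules. A minimal sketch of such normalization, leaving token-level validation to a real address parser:

      def normalize_ip_list(raw):
          # Strip stray whitespace around each comma-separated token and
          # drop any empty tokens.
          return ",".join(t.strip() for t in raw.split(",") if t.strip())

      assert normalize_ip_list("10.10.10.2 ,10.10.10.3") == "10.10.10.2,10.10.10.3"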

  • Simultaneous deployment of a large number of virtual machines resulted in a network adapter connection failure
    HA aborted several failover attempts for virtual machines after deployment, and no dvPort data was loaded. The affected virtual machines were marked to start with their network adapters disconnected.

    This issue was resolved in NSX 6.1.3.

  • Adding Logical Switch to a Security Group failed
    Adding a new Logical Switch or editing an existing Logical Switch as the include or exclude value in a Service Composer security group failed to complete, and the user interface appeared to hang.

    This issue was resolved in NSX 6.1.3.

  • Cannot view VM settings from Web Client if VM had security tags
    Viewing a VM's settings from the vSphere Web Client failed for users with no NSX role and displayed the error "status code = 403, status message = [Forbidden]" if the VM included security tag information.

    This issue was resolved in NSX 6.1.3.

The following issues were resolved in the 6.1.2 release:

  • ARP filter for the forwarding/uplink interface IP address is missing from VDR
    The ARP filter for the forwarding/uplink interface IP address was missing from the NSX Edge control VM, which led to instances in which the NSX distributed logical router (DLR) responded to ARP requests that the control VM should have handled.

    This issue was resolved in NSX 6.1.2.

  • vNICs get ejected because of insufficient ESXi heap memory
    Depending on the number of filters, firewall rules, and grouping constructs on an ESXi host, the allocated heap memory on the host could be exceeded, resulting in vNICs being disconnected.
    The ESXi heap memory allocation has been increased to 1.5 GB in this release.

    This issue was resolved in NSX 6.1.2.

  • When objects used in firewall rules are renamed, the new name is not reflected in the Firewall table

    This issue was resolved in NSX 6.1.2.

  • Request to add support for NTLM authentication in NSX Edge Load Balancer

    This issue was resolved in NSX 6.1.2.

  • NSX vSphere CPU licenses are displayed as VM licenses
    NSX vSphere CPU entitlements are displayed as VM entitlements in the vSphere Licensing tab. For example, if a customer has licenses for 100 CPUs, the user interface displays 100 VMs.

    This issue was resolved in NSX 6.1.2.

  • Implicit deny rule for BGP filters created on Edge Services Gateway but not on Logical Router
    When a BGP outbound neighbor filter is configured on an Edge Services Gateway, only prefixes with an explicit accept policy are advertised; an implicit deny rule is created automatically. On a Logical Router, by contrast, all prefixes are advertised unless explicitly blocked.

    This issue was resolved in NSX 6.1.2.
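
    The difference amounts to a default policy: on the Edge Services Gateway a prefix is advertised only if a filter explicitly permits it, while on the Logical Router a prefix is advertised unless a filter explicitly denies it. A sketch of the two evaluation modes, using exact-match filters as an illustrative simplification:

      def advertise(prefix, filters, default_deny):
          # filters: list of (prefix, action) pairs, action "permit" or "deny".
          for flt_prefix, action in filters:
              if flt_prefix == prefix:
                  return action == "permit"
          # No filter matched: the ESG behaves as implicit deny, the
          # Logical Router as implicit permit.
          return not default_deny

      filters = [("192.168.1.0/24", "permit")]
      assert advertise("10.0.0.0/8", filters, default_deny=True) is False   # ESG
      assert advertise("10.0.0.0/8", filters, default_deny=False) is True   # Logical Router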

The following issue was resolved in the 6.1.1 release:

  • NSX appliances vulnerable to BASH Shellshock security vulnerability
    This patch updates Bash libraries in the NSX appliances to resolve multiple critical security issues, commonly referred to as Shellshock. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifiers CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, and CVE-2014-7187 to these issues.
    To address this vulnerability, you must upgrade all NSX appliances. To upgrade, follow the instructions in the NSX Installation and Upgrade Guide.

    This issue was resolved in NSX 6.1.1.
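
    The widely published local probe for CVE-2014-6271 exports a function-style environment variable with trailing commands; a patched bash ignores the trailing commands. A sketch of that probe, wrapped in Python for consistency with the other sketches in this document:

      import subprocess

      # Classic CVE-2014-6271 probe: a vulnerable bash executes the code
      # that follows the function body when it imports the variable.
      env = {"x": "() { :;}; echo vulnerable"}
      result = subprocess.run(["/bin/bash", "-c", "echo test"],
                              env=env, capture_output=True, text=True)
      print("vulnerable" if "vulnerable" in result.stdout else "patched")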

The following issues were resolved in the 6.1.0 release:

  • Microsoft Clustering Services failover does not work correctly with Logical Switches
    When virtual machines send ARP probes as part of the duplicate address detection (DAD) process, the VXLAN ARP suppression layer responds to the ARP request. This causes the IP address acquisition to fail, which results in failure of the DAD process.

    This issue was resolved in NSX 6.1.0.
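
    The root cause follows from how duplicate address detection works: a DAD probe is an ARP request whose sender protocol address is 0.0.0.0 (RFC 5227), and answering it from a cache makes the probed address appear taken. A sketch of the suppression decision, assuming, as an illustration, that probes must be flooded rather than answered:

      def handle_arp_request(sender_ip, target_ip, arp_cache):
          # Return a cached MAC to answer locally, or None to flood the
          # request onto the logical switch as usual.
          if sender_ip == "0.0.0.0":
              # DAD probe (RFC 5227): never answer from the suppression
              # cache, or the probing VM will believe its address is taken.
              return None
          return arp_cache.get(target_ip)

      cache = {"10.0.0.5": "00:50:56:aa:bb:cc"}
      assert handle_arp_request("0.0.0.0", "10.0.0.5", cache) is None
      assert handle_arp_request("10.0.0.9", "10.0.0.5", cache) == "00:50:56:aa:bb:cc"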

  • NSX Manager does not restore correctly from backup
    After NSX Manager is restored from backup, communication channels with the Logical Router control virtual machine do not recover correctly. As a result, logical switches and portgroups cannot be connected to or disconnected from the Logical Router.

    This issue was resolved in NSX 6.1.0.

  • Logical routing configuration does not work in stateless environment
    When using stateless ESXi hosts with NSX, the NSX Controller may send distributed routing configuration information to the hosts before the distributed virtual switch is created. This creates an out-of-sync condition, and connectivity between two hosts on different switches fails.

    This issue was resolved in NSX 6.1.0.
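
    Conceptually, the fix is an ordering guard: configuration that references the distributed virtual switch must be held, or retried, until the switch exists on the host. A minimal sketch of that pattern, with a hypothetical host interface:

      import time

      class Host:
          # Hypothetical stand-in for a stateless ESXi host.
          def __init__(self):
              self.dvswitches = set()
              self.routing_config = None

      def apply_when_ready(host, dvs_id, config, retries=30, delay=2.0):
          # Hold the distributed-router configuration until the host has
          # created the distributed virtual switch it refers to.
          for _ in range(retries):
              if dvs_id in host.dvswitches:
                  host.routing_config = config
                  return True
              time.sleep(delay)
          return False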

  • Enabling HA on a deployed Logical Router causes the router to lose its distributed routes on ESXi hosts
    The Logical Router instance is deleted and re-created on ESXi hosts as part of the process of enabling HA. After the instance is re-created, routing information from the router's control virtual machine is not re-synced correctly. This makes the router lose its distributed routes on ESXi hosts.

    This issue was resolved in NSX 6.1.0.

  • REST request fails with error HTTP/1.1 500 Internal Server Error
    When Single Sign-On (SSO) is not configured correctly, all REST API calls fail with this message because NSX cannot validate the credentials.

    This issue was resolved in NSX 6.1.0.
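
    For reference, NSX Manager REST calls authenticate with HTTP basic credentials, so a broken SSO lookup surfaced on every call. A minimal Python sketch of such a call; the endpoint shown (listing controllers) and the credentials are placeholders:

      import requests

      resp = requests.get(
          "https://<NSX-Manager-IP>/api/2.0/vdn/controller",
          auth=("admin", "password"),   # placeholder credentials
          verify=False,                 # lab only; verify certificates in production
      )
      # Before the fix, a misconfigured SSO caused even valid credentials
      # to return HTTP/1.1 500 Internal Server Error here.
      print(resp.status_code)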

  • When navigating between NSX Edge devices, vSphere Web Client hangs or displays a blank page

    This issue was resolved in NSX 6.1.0.

  • HA enabled Logical Router does not redistribute routes after upgrade or redeploy
    When you upgrade or redeploy a Logical Router which has High Availability enabled on it, the router does not redistribute routes.

    This issue was resolved in NSX 6.1.0.

  • Cannot configure OSPF on more than one NSX Edge uplink
    It is not possible to configure OSPF on more than one of the NSX Edge uplinks.

    This issue was resolved in NSX 6.1.0.

  • Error configuring IPSec VPN
    When you configure the IPSec VPN service, you may see the following error:
    [Ipsec] The localSubnet: xxx.xxx.xx.x/xx is not reachable, it should be reachable via static routing or one subnet of internal Edge interfaces.

    This issue was resolved in NSX 6.1.0.
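
    The error is a validation check: the configured local subnet must either fall within the subnet of one of the Edge's internal interfaces or be covered by a static route. The check can be expressed with Python's ipaddress module; the subnet values below are illustrative:

      import ipaddress

      def local_subnet_reachable(local_subnet, internal_subnets, static_routes):
          target = ipaddress.ip_network(local_subnet)
          covering = [ipaddress.ip_network(s) for s in internal_subnets + static_routes]
          return any(target.subnet_of(net) for net in covering)

      assert local_subnet_reachable("192.168.10.0/24", ["192.168.10.0/24"], [])
      assert not local_subnet_reachable("172.16.0.0/16",
                                        ["192.168.10.0/24"], ["10.0.0.0/8"])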

  • Problems deleting EAM agencies
    In order to successfully remove EAM agencies from ESX Agent Manager (EAM), the NSX Manager that deployed the services corresponding to the EAM agencies must be available.

    This issue was resolved in NSX 6.1.0.

  • No warning displayed when a logical switch used by a firewall rule is being deleted
    You can delete a logical switch even if it is in use by a firewall rule. The firewall rule is marked as invalid, but the logical switch is deleted without any warning that the switch is used by the firewall rule.

    This issue was resolved in NSX 6.1.0.

  • Load balancer pool member displays WARNING message
    Even though a load balancer pool member shows a WARNING message, it can still process traffic. You can ignore this message.

    This issue was resolved in NSX 6.1.0.

  • Cannot configure untagged interfaces for a Logical Router
    The VLAN ID for the vSphere distributed switch to which a Logical Router connects cannot be 0.

    This issue was resolved in NSX 6.1.0.

  • L2 VPN with IPv6 is not supported in NSX 6.1.x releases

    This issue was resolved in NSX 6.1.0.

  • Firewall rules that use invalid logical switches as source or destination are displayed
    A logical switch can be deleted independently of the firewall rule. Since no confirmation message is displayed before deletion of a logical switch, you may delete a logical switch without being aware that it is being used in a firewall rule.

    This issue was resolved in NSX 6.1.0.

  • After upgrade from 5.5 to 6.0.x, VXLAN connectivity fails if enhanced LACP teaming is enabled
    When a data center has at least one cluster with enhanced LACP teaming enabled, communication between two hosts in any of the clusters may be affected. This issue does not happen when upgrading from NSX 6.0.x to NSX 6.1.

    This issue was resolved in NSX 6.1.0.

  • Policy configuration is not updated until there is a change in the configuration

    This issue was resolved in NSX 6.1.0.

Document Revision History

  • 01 October 2015: First edition for NSX 6.1.5.
  • 22 November 2015: Second edition for NSX 6.1.5. Clarified the would-block issue (1328589). This issue remains a known issue until a fix is provided in vSphere. In NSX 6.1.5 and 6.2.0, NSX adds mitigations for the issue.