VMware

NSX for vSphere 6.1.4 Release Notes

NSX for vSphere 6.1.4 | 27 May 2015 | Build 2691049

What's in the Release Notes

The release notes cover the following topics:

Important

If your installation is running NSX 6.1.3, do not upgrade to NSX 6.1.4. See Attempt to delete existing NSX Edge Gateway fails in an environment upgraded to NSX 6.1.4, later in this document.

What's New

See what's new and changed in 6.1.4, 6.1.3, 6.1.2, 6.1.1, 6.1.0.

New in 6.1.4

Changes introduced in NSX vSphere 6.1.4:

  • Security fixes: This release completes a series of fixes to address the Skip-TLS (CVE-2014-6593), FREAK (CVE-2015-0204), and POODLE (CVE-2014-3566) vulnerabilities, as well as fixes for other issues. See the Resolved Issues section of this document. Please check that any third-party components (such as third-party partner solutions) support the updated JRE and OpenSSL versions used in NSX.
     
  • Compatibility with vSphere 6.0: NSX vSphere 6.1.4 is compatible with vSphere 6.0. However, the new vSphere features introduced in vSphere 6.0 have not been tested with NSX vSphere. These new vSphere features should not be used in environments where NSX vSphere is installed, as they are unsupported. For a list of specific NSX vSphere limitations with respect to vSphere 6.0, see the VMware Knowledge Base article 2110197.
  • Ability to force publishing of configuration to newly deployed NSX Controllers: NSX vSphere 6.1.4 introduces a Force Sync Controller command to push logical switch information to newly deployed NSX controllers. The following API call to NSX Manager pushes the original configuration back to each of the newly deployed controllers (a curl sketch appears after this list).
    Method:
    PUT https://<NSX-Manager-IP>/api/2.0/vdn/controller/synchronize?
  • OSPF on sub-interfaces: OSPF is now supported on sub-interfaces (Fixed Issue 1436023).
  • End of MSRPC support in ALG rules: NSX Edge no longer supports the MSRPC protocol for Application Level Gateway rules on the Edge Firewall.
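For reference, the Force Sync Controller call above can be issued with any REST client. A minimal curl sketch (the NSX Manager address and admin credentials are placeholders for your environment):

  curl -k -u admin:<password> -X PUT "https://<NSX-Manager-IP>/api/2.0/vdn/controller/synchronize"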

New in 6.1.3

Changes introduced in NSX vSphere 6.1.3:

  • Dynamic routing protocols are supported on sub-interfaces.
  • ECMP and Logical Firewall are supported at the same time with logical routing.

New in 6.1.2

NSX vSphere 6.1.2 updated NSX Edge and NSX Controllers to OpenSSL 1.0.1j and addressed several other CVEs. This release also included an updated API call to address the POODLE vulnerability. Using this API call, you can disable SSLv3 on specific NSX Edges in your environment. For more information, see Resolved Issues.

New in 6.1.1

NSX vSphere 6.1.1 provided patches for all NSX appliances to address the BASH Shellshock security vulnerability.

New in 6.1.0

NSX vSphere 6.1 included multiple new features as well as operations, consumption, and hardening enhancements:

  • Highly available NSX Edge clusters with faster uplink speeds: NSX enables you to create highly available and distributed NSX Edge clusters, provides high-bandwidth uplink connections to physical networks, and ensures active-active redundancy at the network virtualization edge. ECMP on NSX Edge allows high-throughput aggregate North-South bandwidth and enables scale-out of the NSX Edge.
  • Enhanced micro-segmentation and firewall operations: NSX 6.1 improves micro-segmentation capabilities by providing improved provisioning, troubleshooting, and monitoring with NSX Distributed and Edge Firewalls. There is a new unified interface for configuring both Distributed and Edge firewalls. Integration of NSX with VMware vRealize Automation 6.1 (vRA 6.1, formerly VMware vCloud Automation Center) allows administrators to integrate their security automation workflows with their compute automation workflows. In addition, NSX 6.1 enables traffic redirection to partner products like next-generation firewalls and intrusion prevention services.
  • Connect multiple data centers or offer hybrid cloud services in Software Defined Data Center (SDDC): Using Layer 2 VPN on NSX Edge, IT teams can migrate workloads, consolidate data centers, or create stretched application tiers across multiple data centers. Service providers can offer tenant onboarding and cloud bursting services where tenant application networks are preserved across data centers without the need for NSX on customer premises.
  • Unified IP address management across the entire data center: With DHCP Relay, you can integrate existing DHCP services available in physical data centers into the SDDC. This ensures a consistent IP addressing policy and easy IP management across the data center. NSX vSphere 6.1 supports multiple DHCP servers on a single logical router and allows multiple existing DHCP servers to be integrated.
  • NSX Load Balancer Enhancement: To enable load balancing and high availability for more applications hosted in NSX, NSX 6.1.0 introduced UDP and FTP load balancing. This allows NSX to provide load balancing for applications such as syslog, NTP, and DNS.
  • Use Application Delivery Controllers (ADCs) in a software-defined data center (SDDC) context: NSX 6.1 allows customers using NSX partner ADCs to protect their investment and leverage advanced ADC services from compatible vendors.
  • Advanced host or network security services within SDDC: Enhanced partner integration with the NSX Service Composer supports multiple security services, including suite solutions that combine host-based and network-based services in a single policy.
  • Dynamic and secure self-service in SDDC: NSX 6.1 with vCloud Automation Center® helps you optimize resource utilization and scale by dynamically connecting self-service applications to NSX logical networks while ensuring that infrastructure security policies are automatically applied in order to isolate and protect applications. Refer to the VMware vCloud Automation Center Release Notes for the feature list.

System Requirements and Installation

The VMware Product Interoperability Matrix provides details about the compatibility of current and previous versions of VMware products and components, such as VMware vCenter Server.

See the notes above regarding Compatibility with vSphere 6.0.

Upgrade Notes

Versions 5.5.x, 6.0.x, 6.1.0, 6.1.1, and 6.1.2 can be directly upgraded to 6.1.4.

Important: If your installation is running NSX 6.1.3, do not upgrade to NSX 6.1.4. See Attempt to delete existing NSX Edge Gateway fails in an environment upgraded to NSX 6.1.4, later in this document.

To upgrade NSX vSphere to 6.1.4, follow the steps below:

  1. Upgrade NSX Manager and all NSX components to 6.1.4. See the NSX Installation and Upgrade Guide for instructions. If you are upgrading from vCNS 5.5.x, please see Resolved Issue 1429432.
    If you do not want to upgrade vCenter Server and ESXi to 6.0, you are done with the upgrade procedure.
    If you want to upgrade vCenter Server and ESXi, complete the remaining steps in this procedure.
  2. Upgrade vCenter Server to 6.0. See VMware vSphere 6.0 Documentation for instructions.
  3. To upgrade without any downtime, identify a subset of the hosts in your environment that you can start upgrading. Place these hosts in maintenance mode.
  4. Upgrade ESXi to 6.0. See VMware vSphere 6.0 Documentation for instructions.
    Depending on the host settings and upgrade method, the hosts are either automatically rebooted or you have to manually reboot them.
    When the hosts boot up, NSX Manager pushes the ESXi 6.0 VIBs to the hosts.
  5. When the hosts show reboot required on the Hosts and Clusters tab in the left hand side of the vSphere Web Client, reboot the hosts a second time.
    NSX VIBs for ESXi 6.0 are now enabled. (A verification sketch follows this procedure.)
  6. Take the upgraded hosts out of maintenance mode.
  7. Repeat steps 3 through 6 for the next subset of hosts until all the hosts in your environment are upgraded.
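After the second reboot, you can optionally confirm from the host that the NSX VIBs are installed and at the expected version. A minimal check (a sketch; the grep pattern assumes the standard NSX host VIB names such as esx-vsip and esx-vxlan):

  esxcli software vib list | grep esx-v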

Known Issues

Known issues are grouped as follows:

Installation and Upgrade Issues

DVPort fails to enable with "Would block" due to host prep issue
On an NSX-enabled ESXi host, the DVPort fails to enable with "Would block" due to a host preparation issue. When this occurs, the error message first noticed varies (for example, this may be seen as a VTEP creation failure in VC/hostd.log, a DVPort connect failure in vmkernel.log, or a 'SIOCSIFFLAGS' error in the guest). This happens when VIBs are loaded after the DVS properties are pushed by vCenter. This may happen during upgrade.
 
Workaround: An additional reboot of the ESXi host solves this.

SSO cannot be reconfigured after upgrade
When the SSO server configured on NSX Manager is the one native on vCenter server, you cannot reconfigure SSO settings on NSX Manager after vCenter Server is upgraded to version 6.0 and NSX Manager is upgraded to version 6.1.3.
 
Workaround: None.

Service Deployment UI displays error vAppConfig not present for VM
Workaround: If you see the above error, check for the following:

  1. Deployment of the service virtual machine is complete.
  2. No tasks such as cloning, re-configuring, etc., are in progress for that virtual machine in the vCenter Server task pane.

After steps 1 and 2 are complete, delete the service virtual machine. On the Service Deployment UI, the deployment is shown as Failed. When you click the red icon, an alarm indicating that the agent VM is not available is displayed for the host. When you resolve the alarm, the virtual machine is redeployed and powered on.

vSphere Web Client does not display Networking and Security tab after backup and restore
When you perform a backup and restore operation after upgrading to NSX vSphere 6.1.3, the vSphere Web Client does not display the Networking and Security tab.
 
Workaround: When an NSX Manager backup is restored, you are logged out of the Appliance Manager. Wait a few minutes before logging in to the vSphere Web Client.

Versioned deployment spec needs to be updated to 6.0.* if using vCenter Server 6.0 and ESX 6.0.
Workarounds: The workaround depends on whether the partner is currently on vCloud Networking and Security or NSX for vSphere.

  • Partners that have NetX solutions registered with vCloud Networking and Security must update the registration to include a VersionedDeploymentSpec for 6.0.* with the corresponding OVF using REST API calls.
    1. Upgrade vSphere from 5.5 to 6.0.
    2. Add a versioned deployment specification for 6.0.x using the following API call (a curl sketch of this call appears after this list):
      POST https://<vCNS-IP>/api/2.0/si/service/<service-id>/servicedeploymentspec/versioneddeploymentspec

      <versionedDeploymentSpec>
          <hostVersion>6.0.x</hostVersion>
          <ovfUrl>http://sample.com/sample.ovf</ovfUrl>
          <vmciEnabled>true</vmciEnabled>
      </versionedDeploymentSpec>

      The URL for the OVF file is provided by the partner.
    3. Update the service by using the following REST call:
      POST https://<vsm-ip>/api/2.0/si/service/config?action=update
    4. Resolve the EAM alarm by following the steps below.
      1. In vSphere Web Client, click Home and then click Administration.
      2. In Solution, select vCenter Server Extension.
      3. Click vSphere ESX Agent Manager and then click the Manage tab.
      4. Right-click the failed agency status and select Resolve All Issues.
  • If partners that have NetX solutions registered with NSX upgrade vSphere to 6.0, they must update registration to include a VersionedDeploymentSpec for 6.0.* with the corresponding OVF. Follow the steps below:

    1. Add versioned deployment specification for 6.0.* using the following steps.
      1. In vSphere Web Client, click Home and then click Networking and Security.
      2. Click Service Definitions and then click the name of the service.
      3. Click Manage and then click Deployment.
      4. Click + and add the ESX version 6.0.* with the OVF URL of the corresponding service virtual machine.
      5. Click OK.
    2. Resolve the issue by following the steps below.
      1. Click Installation.
      2. Click Service Deployments.
      3. Select the deployment and click Resolve.
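For reference, steps 2 and 3 of the first workaround above can be issued with curl. A sketch that reuses the XML body shown in step 2 (addresses, credentials, the service ID, and the OVF URL are placeholders):

  curl -k -u admin:<password> -H "Content-Type: application/xml" -X POST \
    "https://<vCNS-IP>/api/2.0/si/service/<service-id>/servicedeploymentspec/versioneddeploymentspec" \
    -d '<versionedDeploymentSpec><hostVersion>6.0.x</hostVersion><ovfUrl>http://sample.com/sample.ovf</ovfUrl><vmciEnabled>true</vmciEnabled></versionedDeploymentSpec>'

  curl -k -u admin:<password> -X POST "https://<vsm-ip>/api/2.0/si/service/config?action=update"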

After upgrading NSX vSphere from 6.0.7 to 6.1.3, vSphere Web Client crashes on login screen
After upgrading NSX Manager from 6.0.7 to 6.1.3, exceptions are displayed on the vSphere Web Client UI login screen. You cannot log in or perform operations on either vCenter Server or NSX Manager.
 
Workaround: Log in to the vCenter Server Appliance (VCVA) as root and restart the vSphere Web Client service.

If vCenter is rebooted during NSX vSphere upgrade process, incorrect Cluster Status is displayed
When you prepare hosts in an environment with multiple NSX-prepared clusters during an upgrade and vCenter Server is rebooted after at least one cluster has been prepared, the other clusters may show a Cluster Status of Not Ready instead of showing an Update link. The hosts in vCenter may also show Reboot Required.
 
Workaround: Do not reboot vCenter during Host Preparation.

After upgrading from vCloud Networking and Security 5.5.3 to NSX vSphere 6.0.5 or later, NSX Manager does not start up if you are using an SSL certificate with DSA-1024 keysize
SSL certificates with DSA-1024 keysize are not supported in NSX vSphere 6.0.5 onwards, so the upgrade is not successful.
 
Workaround: Import a new SSL certificate with a supported keysize before starting the upgrade.

NSX Edge upgrade fails if L2 VPN is enabled on the Edge
Updating an L2 VPN configuration from 5.x or 6.0.x to 6.1 is not supported. Therefore, the Edge upgrade fails if L2 VPN is configured on the Edge.
 
Workaround: Delete L2 VPN configuration before upgrading NSX Edge. After the upgrade, re-configure L2 VPN.

SSL VPN does not send upgrade notification to remote client
The SSL VPN gateway does not send an upgrade notification to remote users. The administrator must manually inform remote users that the SSL VPN gateway (server) has been updated and that they must update their clients.

After upgrading NSX from version 6.0 to 6.0.x or 6.1, NSX Edges are not listed on the UI
When you upgrade from NSX 6.0 to NSX 6.0.x or 6.1, the vSphere Web Client plug-in may not upgrade correctly. This may result in UI display issues such as missing NSX Edges.
This issue is not seen if you are upgrading from NSX 6.0.1 or later.
 
Workaround: Follow the steps below.

  1. In the vCenter Managed Object Browser (MOB), click Content.
  2. In the Value column, click ExtensionManager.
  3. Look for extensionList["com.vmware.vShieldManager"] and copy the string.
  4. In the Methods area, click UnregisterExtension.
  5. In the Value field, paste the string that you copied in step 3.
  6. Click Invoke Method.

This ensures deployment of the latest plug-in package.

vSphere Distributed Switch MTU does not get updated
If you specify an MTU value lower than the MTU of the vSphere distributed switch when preparing a cluster, the vSphere Distributed Switch does not get updated to this value. This is to ensure that existing traffic with the higher frame size isn't unintentionally dropped.
 
Workaround: Ensure that the MTU you specify when preparing the cluster is higher than or matches the current MTU of the vSphere distributed switch. The minimum required MTU for VXLAN is 1550.

If not all clusters in your environment are prepared, the Upgrade message for Distributed Firewall does not appear on the Host Preparation tab of Installation page
When you prepare clusters for network virtualization, Distributed Firewall is enabled on those clusters. If not all clusters in your environment are prepared, the upgrade message for Distributed Firewall does not appear on the Host Preparation tab.
 
Workaround: Use the following REST call to upgrade Distributed Firewall:

PUT https://<vsm-ip>/api/4.0/firewall/globalroot-0/state
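For example, issued with curl (a sketch; the NSX Manager address and admin credentials are placeholders):

curl -k -u admin:<password> -X PUT "https://<vsm-ip>/api/4.0/firewall/globalroot-0/state"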

If a service group is modified after the upgrade to add or remove services, these changes are not reflected in the firewall table
User created service groups are expanded in the Edge Firewall table during upgrade - i.e., the Service column in the firewall table displays all services within the service group. If the service group is modified after the upgrade to add or remove services, these changes are not reflected in the firewall table.
 
Workaround: Create a new service group with a different name and then consume this service group in the firewall rule.

Guest Introspection installation fails with error
When installing Guest Introspection on a cluster, the install fails with the following error:
Invalid format for VIB Module
 
Workaround: In the vCenter Web Client, navigate to vCenter Home > Hosts and Clusters and reboot the hosts that have Reboot Required next to them in the left hand side inventory.

Service virtual machine deployed using the Service Deployments tab on the Installation page does not get powered on
Workaround: Follow the steps below.

  1. Manually remove the service virtual machine from the ESX Agents resource pool in the cluster.
  2. Click Networking and Security and then click Installation.
  3. Click the Service Deployments tab.
  4. Select the appropriate service and click the Resolve icon.
    The service virtual machine is redeployed.

If a service profile created in 6.0.x is bound to both security group and distributed portgroup or logical switch, Service Composer firewall rules are out of sync after upgrade to NSX 6.1.x
If a service profile binding was done to both security group and distributed portgroup or logical switch in 6.0.x, Service Composer rules are out of sync after the upgrade to 6.1. Rules cannot be published from the Service Composer UI.
 
Workaround: Follow the steps below.

  1. Unbind the service profile from the distributed portgroup or logical switch through the Service Definition UI.
  2. Create a new security group with the required distributed portgroup or logical switch as a member of that security group.
  3. Bind the service profile to the new security group through the Service Definition UI.
  4. Synchronize the firewall rules through the Service Composer UI.

General Issues

Unable to power on guest virtual machine
When you power on a guest virtual machine, the error All required agent virtual machines are not currently deployed may be displayed.
 
Workaround: Follow the steps below.

  1. In the vSphere Web Client, click Home and then click Administration.
  2. In Solution, select vCenter Server Extension.
  3. Click vSphere ESX Agent Manager and then click the Manage tab.
  4. Click Resolve.

Security policy name does not allow more than 229 characters
The security policy name field in the Security Policy tab of Service Composer can accept up to 229 characters. This is because policy names are prepended internally with a prefix.
 
Workaround: None.

NSX Manager Issues

After NSX Manager backup is restored, REST call shows status of fabric feature "com.vmware.vshield.vsm.messagingInfra" as "Red"
When you restore the backup of an NSX Manager and check the status of fabric feature "com.vmware.vshield.vsm.messagingInfra" using a REST API call, it is displayed as "Red" instead of "Green".
 
Workaround: Use the following REST API call to reset communication between NSX Manager and a single host or all hosts in a cluster.
POST https://<nsxmgr-ip>/api/2.0/nwfabric/configure?action=synchronize
<nwFabricFeatureConfig>
<featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
<resourceConfig>
    <resourceId>HOST/CLUSTER MOID</resourceId>
</resourceConfig>
</nwFabricFeatureConfig>
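For reference, the same call issued with curl, reusing the XML body above (a sketch; the NSX Manager address, admin credentials, and the host or cluster MOID are placeholders):

curl -k -u admin:<password> -H "Content-Type: application/xml" -X POST \
  "https://<nsxmgr-ip>/api/2.0/nwfabric/configure?action=synchronize" \
  -d '<nwFabricFeatureConfig><featureId>com.vmware.vshield.vsm.messagingInfra</featureId><resourceConfig><resourceId>HOST/CLUSTER MOID</resourceId></resourceConfig></nwFabricFeatureConfig>'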

Syslog shows host name of backed up NSX Manager on the restored NSX Manager
Suppose the host name of the first NSX Manager is A and a backup is created for that NSX Manager. A second NSX Manager is then installed with host name B and configured with the same IP address as the old NSX Manager, as described in the backup and restore documentation. Restore is run on this NSX Manager. The restored NSX Manager shows host name A just after the restore and host name B again after a reboot.
 
Workaround: Configure the host name of the second NSX Manager to be the same as that of the backed-up NSX Manager.

CA-signed certificate import needs an NSX Manager reboot before becoming effective
When you import a CA-signed NSX Manager certificate, the newly imported certificate does not become effective until NSX Manager is rebooted.
 
Workaround: Reboot NSX Manager by logging in to the NSX Manager virtual appliance or by using the reboot CLI command.

Networking and Security Tab not displayed in vSphere Web Client
After vSphere is upgraded to 6.0, you cannot see the Networking and Security Tab when you log in to the vSphere Web Client with the root user name.
 
Workaround: Log in as administrator@vsphere.local or as any other vCenter user which existed on vCenter Server before the upgrade and whose role was defined in NSX Manager.

Service Composer goes out of sync when policy changes are made while one of the Service Managers is down
This issue occurs in deployments where multiple services/Service Managers are registered and policies are created that reference those services. If changes are made in Service Composer to such a policy while one of those Service Managers is down, the changes fail because of a callback failure to the Service Manager that is down. As a result, Service Composer goes out of sync.
 
Workaround: Ensure the Service Manager is responding and then issue a force sync from Service Composer.

Cannot remove and re-add a host to a cluster protected by Guest Introspection and third-party security solutions
If you remove a host from a cluster protected by Guest Introspection and third-party security solutions by disconnecting it and then removing it from vCenter Server, you may experience problems if you try to re-add the same host to the same cluster.
 
Workaround: To remove a host from a protected cluster, first put the host in maintenance mode. Next, move the host into an unprotected cluster or outside all clusters and then disconnect and remove the host.

vMotion of NSX Manager may display error Virtual ethernet card Network adapter 1 is not supported
You can ignore this error. Networking will work correctly after vMotion.

NSX Edge and Logical Routing Issues

Attempt to delete existing NSX Edge Gateway fails in an environment upgraded to NSX 6.1.4
In NSX installations upgraded from 6.1.3 to 6.1.4, the existing NSX Edge Gateways cannot be deleted after the upgrade to 6.1.4. This issue does not affect new Edge Gateways created after the upgrade. Installations that upgraded directly from 6.1.2 or earlier are not affected by this issue. To fix this problem, see VMware Knowledge Base article 2117804 and contact VMware support for assistance. See also VMware Knowledge Base article 2119020 for information about NSX 6.1.3.

Connectivity loss after removing a logical interface (LIF) in installations with dynamic routing
A problem was identified in the NSX Logical Router (Edge/DLR) when using dynamic routing (OSPF and BGP) that causes network connectivity loss after removing a LIF. This affects NSX versions 6.0.x through 6.1.4.
 
In NSX installations that use dynamic routing, each LIF has a redistribution rule index ID associated with it. When a user deletes a LIF in such installations, the index IDs assigned to some active LIFs may change. This index modification can result in a temporary loss of network connectivity for LIFs whose index IDs are changed. If the LIF deletion is serialized, you will see 5-30 seconds of disruption on affected LIFs after each LIF deletion. If the LIF deletion is done in bulk, you will see a total of 5-30 seconds of disruption on affected LIFs.
 
Workarounds: You can avoid this issue using one of the following workarounds:

  • For each LIF created in your deployment, create an IP prefix redistribution rule with the 'Permit' action. Later, when a user wants to delete a LIF, rather than deleting the LIF, the user should edit the LIF's IP prefix rule and change the action from 'Permit' to 'Deny'. The route for that particular LIF will be withdrawn. Retain the LIF(s) that need to be removed until they can be deleted during a maintenance window, when a brief connectivity outage is acceptable. The user must remove the redistribution rule when the associated LIF is deleted.
  • Use static routing instead of dynamic routing, with the understanding that peer failure detection is lost with this workaround.
  • If your workflow allows it, avoid the immediate LIF deletion and instead use a redistribute filter to stop the advertisement of the LIF that needs to be removed. Stage the LIF(s) that need to be removed until they can be deleted during a maintenance window, when a brief connectivity outage is acceptable.

When a BGP neighbor filter rule is modified, the existing filters may not be applied for up to 40 seconds
When BGP filters are applied to an NSX Edge running IBGP, it may take up to 40 seconds for the filters to be applied on the IBGP session. During this time, NSX Edge may advertise routes which are denied in the BGP filter for the IBGP peer.
 
Workaround: None.

After enabling ECMP on a Logical Router, northbound Edge does not receive prefixes from the Logical Router
 
Workaround: Follow the steps below:

  1. Disable ECMP on the Logical Router.
  2. Disable OSPF.
  3. Enable ECMP.
  4. Enable OSPF.

If Certificate Authentication is enabled under Authentication configuration of SSL VPN-Plus service, connection to the SSL VPN server from an older version of Windows client fails
If Certificate Authentication is enabled, the TLS handshake between an older version of the Windows client and the latest version of SSL VPN fails. This prevents the Windows client from connecting to SSL VPN. This issue does not occur for Linux and Mac clients or for a browser-based connection to SSL VPN.
 
Workaround: Upgrade the Windows client to the latest version (6.1.4).

Upgrade of standalone NSX Edge as L2 VPN client is not supported
 
Workaround: You must deploy a new standalone Edge OVF and reconfigure the appliance settings.

When the direct aggregate network in the local and remote subnets of an IPsec VPN channel is removed, the aggregate route to the indirect subnets of the peer Edge also disappears
When there is no default gateway on the Edge and you remove all of the directly connected subnets from the local subnets and part of the remote subnets at the same time while configuring IPsec, the remaining peer subnets become unreachable over IPsec VPN.
 
Workaround: Disable and re-enable IPsec VPN on NSX Edge.

Modifying an SSL VPN-Plus logon/logoff script does not work
The modified script is correctly reflected in the vSphere Web Client but not on the gateway.
 
Workaround: Delete the original script and add it again.

Adding a route that is learned through protocol as connected results in the local Forwarding Information Base (FIB) table showing both connected and dynamically learned routes
If you add a route that has already been learned through a protocol as a connected route, the local FIB shows both the connected and the dynamically learned route. The dynamically learned route is shown as preferred over the directly connected route.
 
Workaround: Withdraw the learned route from the route advertisement so that it gets deleted from the FIB table and configure only the connected route.

If an NSX Edge virtual machine with one sub interface backed by a logical switch is deleted through the vCenter Web Client user interface, data path may not work for a new virtual machine that connects to the same port
When the Edge virtual machine is deleted through the vCenter Web Client user interface (and not from NSX Manager), the VXLAN trunk configured on the dvPort over the opaque channel is not reset. This is because the trunk configuration is managed by NSX Manager.
 
Workaround: Manually delete the VXLAN trunk configuration by following the steps below:

  1. Navigate to the vCenter Managed Object Browser by typing the following in a browser window:
    https://vc-ip/mob?vmodl=1
  2. Click Content.
  3. Retrieve the dvsUuid value by following the steps below.
    1. Click the rootFolder link (for example, group-d1(Datacenters)).
    2. Click the data center name link (for example, datacenter-1).
    3. Click the networkFolder link (for example, group-n6).
    4. Click the DVS name link (for example, dvs-1).
    5. Copy the value of uuid.
  4. Click DVSManager and then click updateOpaqueDataEx.
  5. In selectionSet, add the following XML
    <selectionSet xsi:type="DVPortSelection">
        <dvsUuid>value</dvsUuid>
        <portKey>value</portKey> <!--port number of the DVPG where trunk vnic got connected-->
    </selectionSet>
  6. In opaqueDataSpec, add the following XML
    <opaqueDataSpec>
        <operation>remove</operation>
        <opaqueData>
          <key>com.vmware.net.vxlan.trunkcfg</key>
          <opaqueData></opaqueData>
        </opaqueData>
    </opaqueDataSpec>
  7. Set isRuntime to false.
  8. Click Invoke Method.
  9. Repeat steps 5 through 8 for each trunk port configured on the deleted Edge virtual machine.

When Default Originate is enabled, BGP filter for deny default route does not get applied
When BGP Default Originate is enabled on an NSX Edge, a default route gets advertised to all BGP neighbors unconditionally. If you do not want a BGP neighbor to install the default route advertised by this BGP speaker, you must configure an inbound policy on that BGP neighbor to reject the default route.
 
Workaround: Configure an inbound policy on the appropriate BGP neighbor to reject the default route.

Cannot add non-ASCII characters in bridge or tenant name for Logical Router
NSX controller APIs do not support non-ASCII characters.
 
Workaround: Use ASCII characters in bridge and tenant names. You can then edit the names to include non-ASCII characters.

SNAT and Load Balancer (with L4 SNAT) configured on a sub interface do not work
SNAT rule configuration passes on NSX Edge but the data path for the rule does not work due to RP Filter checks.
 
Workaround: Contact VMware Support for help in relaxing the RP filter check on NSX Edge.

When egress optimization is enabled for L2 VPN, load balancers with pool members stretched across site are shown as down
With egress optimization, both L2 VPN client and server have the same internal IP address. Because of this, any packet from a pool member to the load balancer does not reach NSX Edge.
 
Workaround: Do one of the following.

  • Disable egress optimization.
  • Assign an IP address for the load balancer that is different from the egress-optimized IP address.

Static routes do not get pushed to hosts when a next hop address is not specified
The UI allows you to create a static route on an NSX Edge device without specifying a next hop address. If you do not specify a next hop address, the static route does not get pushed to hosts.
 
Workaround: Always specify a next hop address for static routes.

Cannot configure NSX firewall using security groups or other grouping objects defined at global scope
Administrator users defined at the NSX Edge scope cannot access objects defined at the global scope. For example, if user abc is defined at Edge scope and security group sg-1 is defined at global scope, then abc will not be able to use sg-1 in firewall configuration on the NSX Edge.
 
Workaround: The administrator must use grouping objects defined at NSX Edge scope only, or must create a copy of the global scope objects at the NSX Edge scope.

Logical Router LIF routes are advertised by upstream Edge Services Gateway even if Logical Router OSPF is disabled
Upstream Edge Services Gateway will continue to advertise OSPF external LSAs learned from Logical Router connected interfaces even when Logical Router OSPF is disabled.
 
Workaround: Manually disable redistribution of connected routes into OSPF and publish the change before disabling the OSPF protocol. This ensures that routes are properly withdrawn.

When HA is enabled on Edge Services Gateway, OSPF hello and dead interval configured to values other than 30 seconds and 120 seconds respectively can cause some traffic loss during failover
When the primary NSX Edge fails with OSPF running and HA enabled, the time required for the standby to take over exceeds the graceful restart timeout, which results in OSPF neighbors removing the learned routes from their Forwarding Information Base (FIB) tables. This causes a dataplane outage until OSPF reconverges.
 
Workaround: On all neighboring routers, set the hello interval to 30 seconds and the dead interval to 120 seconds (the default values). This enables graceful failover without traffic loss.

The UI allows you to add multiple IP addresses to a Logical Router interface though it is not supported
This release does not support multiple IP addresses for a logical router interface.
 
Workaround: None.

SSL VPN does not support Certificate Revocation Lists (CRL)
A CRL can be added to NSX Edge, but this CRL is not consumed by SSL VPN.
 
Workaround: CRL is not supported, but you can enable user authentication with client certificate authentication.

Must use IP address, not hostname, to add an external authentication server to SSL VPN-Plus
You cannot use the FQDN or hostname of the external authentication server.
 
Workaround: You must use the IP address of the external authentication server.

Firewall Issues

Some versions of Palo Alto Networks VM-Series do not work with NSX Manager default settings
Some NSX 6.1.4 components disable SSLv3 by default. Before you upgrade, please check that all third-party solutions integrated with your NSX deployment do not rely on SSLv3 communication. For example, some versions of the Palo Alto Networks VM-series solution require support for SSLv3, so please check with your vendors for their version requirements.

In upgraded NSX installations, publishing a firewall rule may result in "Null Pointer exception" in Web Client
In upgraded NSX installations, publishing a firewall rule may result in a "Null Pointer exception" in the UI. The rule changes are saved. This is a display issue only.

UI shows error Firewall Publish Failed despite successful publish
If Distributed Firewall is enabled on a subset of clusters in your environment and you update an application group that is used in one or more active firewall rules, any publish action on the UI will display an error message containing IDs of the hosts belonging to the clusters where NSX firewall is not enabled.
Despite error messages, rules will be successfully published and enforced on the hosts where Distributed Firewall is enabled.
 
Workaround: Contact VMware Support to clear the UI messages.

If you delete the firewall configuration using a REST API call, you cannot load and publish saved configurations
When you delete the firewall configuration, a new default section is created with a new section ID. When you load a saved draft (that has the same section name but an older section ID), section names conflict and display the following error:
Duplicate key value violates unique constraint firewall_section_name_key
 
Workaround: Do one of the following:

  • Rename the current default firewall section after loading a saved configuration.
  • Rename the default section on a loaded saved configuration before publishing it.

When IPFIX configuration is enabled for Distributed Firewall, firewall ports in the ESXi management interface for NetFlow for vDS or SNMP may be removed
When a collector IP address and port are defined for IPFIX, the firewall for the ESXi management interface is opened in the outbound direction for the specified UDP collector ports. This operation may remove the dynamic ruleset configuration on the ESXi management interface firewall for the following services if they were previously configured on the ESXi host:

  • Netflow collector port configuration on vDS
  • SNMP target port configuration

Workaround: To add the dynamic ruleset rules back, refresh the NetFlow settings for the vDS in the vCenter Web Client. You must also add the SNMP target again using esxcli system snmp commands, as in the sketch below. This must be repeated if the ESXi host is rebooted after the IPFIX configuration is enabled or if the esx-vsip VIB is uninstalled from the host.
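A sketch of re-adding the SNMP target from the ESXi shell (the target address, port, and community string are placeholders; verify the exact options against your ESXi version):

esxcli system snmp set --targets=<target-ip>@162/<community>
esxcli system snmp set --enable=true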

Logical Switch Issues

Creating a large number of logical switches with high concurrency using an API call may result in some failures
This issue may occur if you create a large number of logical switches using the following API call:

POST https://<nsxmgr-ip>/api/2.0/vdn/scopes/scopeID/virtualwires

Some logical switches may not be created.
 
Workaround: Re-run the API call.
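For reference, a sketch of one such creation request with curl. The virtualWireCreateSpec body shown here is an assumption based on the usual logical switch creation schema and should be verified against the NSX API guide for your release; the name and tenant ID are placeholders:

curl -k -u admin:<password> -H "Content-Type: application/xml" -X POST \
  "https://<nsxmgr-ip>/api/2.0/vdn/scopes/<scopeID>/virtualwires" \
  -d '<virtualWireCreateSpec><name>example-ls</name><tenantId>example-tenant</tenantId></virtualWireCreateSpec>'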

Service Deployment Issues

Data Security service status is shown as UP even when IP connectivity is not established
The Data Security appliance may not have received an IP address from DHCP, or it may be connected to an incorrect port group.
 
Workaround: Ensure that the Data Security appliance gets an IP address from DHCP or an IP pool and is reachable from the management network. Check whether the Data Security appliance responds to ping from NSX Manager and the ESXi host.

Old service virtual machines not functioning
Old service virtual machines that were left behind on hosts during host removal from the vCenter Server remain disconnected and are unable to function when the host is added back to the same vCenter Server.
 
Workaround: Follow the steps below:

  1. Move the host from the protected cluster to either an unprotected cluster or outside all clusters. This will uninstall the service virtual machines from the host.
  2. Remove the host from the vCenter Server.

Service Insertion Issues

Deleting security rules via REST displays error
If a REST API call is used to delete security rules created by Service Composer, the corresponding rule set is not actually deleted from the service profile cache, resulting in an ObjectNotFoundException error.
 
Workaround: None.

Security policy configured as a port range causes firewall to go out of sync
Configuring security policies as a port range (for example, "5900-5964") will cause the firewall to go out of sync with a NumberFormatException error.
 
Workaround: You must configure firewall security policies as a comma-separated protocol port list.

Resolved Issues

See what's resolved in 6.1.4, 6.1.3, 6.1.2, 6.1.1, 6.1.0.

The following issues have been resolved in the 6.1.4 release:

  • Fixed Issue 1448498: In VXLAN configurations with NIC Teaming on hosts where the DVS switch has 4 physical NIC (PNIC) uplinks, only one of 4 VMKNICs gets the IP address
    In NSX 6.1.3, when you configure VXLAN with source port ID teaming and the DVS has 4 physical NIC (PNIC) uplinks, NSX creates 4 VMKNICs on the host. In this teaming mode, all four VMKNICs should be assigned an IP address, but due to an issue in NSX 6.1.3, only one VMKNIC on each host is assigned an IP address. This issue was resolved in NSX 6.1.4.
  • Fixed Issue 1429432: NSX Manager becomes non-responsive
    NSX Manager becomes non-responsive when it fails to recognize the network adapter. This fix replaces the e1000 network adapter of the vCNS Manager Appliance with a vmxnet3 adapter. In new installations of vCNS 5.5.4.1 and later, this fix is automatically applied. If you are upgrading from vCNS 5.5.x to NSX 6.1.4 or later, you must manually apply the fix as explained in VMware Knowledge Base article 2115459. This issue was resolved in NSX 6.1.4.
  • Fixed Issue 1443458: In installations with multiple vSphere clusters, hosts may disappear from the installation tab
    In installations with multiple vSphere clusters, the host preparation screen in NSX Manager may take approximately 3 minutes to load all clusters. The hosts may disappear from the window temporarily. This issue was resolved in NSX 6.1.4.
  • Fixed Issue 1424992: NSX Networking and Security tab of vSphere Web Client may show data service timeout error when using a large AD store
    In NSX installations with vSphere 6.0, the NSX Networking and Security tab of the vSphere Web Client may show a data service timeout error when using a large AD store. There is no workaround. Reload the Web Client in your browser. This issue was resolved in NSX 6.1.4.
  • Fixed Issue 1438242: Virtual machines connected to a VMware NSX for vSphere Logical Switch and a Distributed Router experience very low bandwidth/throughput
    This fix addresses the issue covered by VMware Knowledge Base article 2110598. This issue was resolved in NSX 6.1.4.
  • Fixed Issue 1421287: L2VPN goes down after pinging broadcast IP. tap0 interface goes down on standalone Edge.
    When pinging the broadcast address, the MAC addresses are learned but the L2VPN tunnel goes down. The tap0 interface goes down after the Edge learns a lot of MAC addresses. This issue was resolved in NSX 6.1.4.
  • Fixed Issue 1406377: High CPU usage at NSX Manager during vCenter inventory updates
    Provisioning firewall rules with a large number of security groups requires a large number of connections to the internal Postgres database. Simultaneously running CPU threads can lead to sustained high CPU on the NSX Manager server. This issue was resolved in NSX 6.1.4.
  • Fixed Issue 1334728: Long NSX Manager load times on large domain accounts
    When domain users who belong to a large number of groups log in to the vSphere Web Client, it takes an extremely long time to access the NSX Manager interface. This issue was resolved in NSX 6.1.4.
  • Fixed Issue 1409714 / 1405945: Fixes to address CVE-2014-6593 "Skip-TLS" and CVE-2015-0204 "FREAK" vulnerabilities (collectively, "SMACK" vulnerabilities)
    This fix addresses the issues generally known as the "SMACK" vulnerabilities. This includes the "FREAK" vulnerability, which affects OpenSSL-based clients by allowing them to be fooled into using export-grade cipher suites. SSL VPN clients have been updated with OpenSSL version 1.0.1L to address this. OpenSSL on the NSX Edge has been updated to version 1.0.1L as well.
     
    This fix also addresses the "Skip-TLS" vulnerability. The Oracle (Sun) JRE package is updated to 1.7.0_75 (version 1.7.0 update 75), because Skip-TLS affected Java versions prior to update 75. Oracle has documented the CVE identifiers that are addressed in JRE 1.7.0_75 in the Oracle Java SE Critical Patch Update Advisory for January 2015.
  • Fixed Issue 1361424: Fixes to address CVE-2014-3566 "POODLE" vulnerability
    The 6.1.4 release included changes that address the CVE-2014-3566 vulnerability (the SSLv3 vulnerability known as "POODLE"). The changes included:
    • The disabling of SSLv3 by default on NSX Manager (since 6.1.2 release). To reenable SSLv3 support on this component, contact VMware support.
    • The disabling of SSLv3 by default on the NSX Edge Load Balancer (since 6.1.4 release). To reenable SSLv3 support on this component, see VMware Knowledge Base article 2116104.
    • A new API method that allows you to disable and reenable SSLv3 on the NSX Edge SSL VPN (since 6.1.4 release). To disable and reenable SSLv3 support on this component, see VMware Knowledge Base article 2115871.
    • An update of the NSX Edge system SSL library to OpenSSL 0.9.8zc.

The following issues were resolved in the 6.1.3 release:

  • Fixed issue 1363274: VMs lose connectivity to networks that are connected via valid Distributed Logical Router configurations
    This occurred due to an error that prevented NSX Manager from being updated with the latest NSX Controller state. During this error condition, NSX failed to sync the SSL state (<controllerConfig><sslEnabled>true/false</sslEnabled></controllerConfig>) to the ESX host after a reboot of NSX Manager. This has been fixed in NSX 6.1.3.
  • Unable to upgrade to NSX vSphere Controllers from vCNS 5.0.x or 5.1.x
    If the vCNS to NSX vSphere migration path started from vCNS version 5.0.x or 5.1.x, NSX vSphere Controller deployment failed due to database schema changes across releases. This issue was resolved in NSX 6.1.3.
  • Host prep/VIB installation failed when ESXi host was configured for lockdown mode
    Host preparation and installation of VXLAN VIBs failed when ESXi host was configured for lockdown mode. This issue was resolved in NSX 6.1.3.
  • Server Certificate Validation for SSL VPN Linux and OS X clients
    We've taken customer feedback and improved the way we manage trust for our Mac and Linux SSL VPN clients. We now make use of standard tools available on these platforms to establish better trust with our SSL VPN server. The Windows VPN client already takes advantage of the platform trust store when installed with "Check Certificates" enabled. For more information, see the NSX Administration Guide. This issue was resolved in NSX 6.1.3.
  • On OSPF-enabled NSX Edge Services Gateways, OSPF adjacency was not established and its negotiation was stuck in the 2-way state
    OSPF adjacency failed to come up and remained stuck in two-way state.
    Dynamic routing protocols were not supported on sub-interfaces. This issue was resolved in NSX 6.1.3.
  • Enabling Equal-Cost Multi-Path routing (ECMP) on a Logical Router disabled firewall
    When ECMP was enabled on the Global Routing tab, firewall was automatically disabled. This issue was resolved in NSX 6.1.3.
  • Configuring Layer 2 Bridging on a Distributed Logical Router failed
    Configuring Layer 2 Bridging on a Distributed Logical Router in NSX for vSphere 6.1.2 failed with error User is not authorized to access object edge-XX and feature edge.bridging. See the VMware Knowledge Base article 2099414 for details. This issue was resolved in NSX 6.1.3.
  • Two unequal cost paths installed in FIB
    When NSX Edge has a static route for a network and it also learns a dynamic route for the same network, the static route is correctly chosen over the dynamic route because static routes are preferred. However, when the interface corresponding to the static route was toggled, the FIB incorrectly ended up installing two paths to this network. This issue was resolved in NSX 6.1.3.
  • SSL VPN Mac client for OS X Yosemite displayed certificate authentication error
    Since Yosemite did not use /Library/StartupItems/ as a startup item, the VMware startup script was not executed when the machine booted up. This issue was resolved in NSX 6.1.3.
  • Firewall Rule publish failed due to whitespace insertion
    Firewall Rule publish was failing because IP translation inserted whitespace into generated IP ranges when nested security groups had excluded members. This issue was resolved in NSX 6.1.3.
  • Virtual machines experienced a network interruption of up to 30 seconds after being vMotioned from one ESXi host to another when Distributed Firewall rules had security groups in the Source and/or Destination fields
    For more information, see the VMware Knowledge Base article 2110197. This issue was resolved in NSX 6.1.3.
  • Firewall IP ruleset with spaces was accepted by firewall UI but not published to hosts
    An IP ruleset with intervening spaces, such as '10.10.10.2 ,10.10.10.3', was accepted in the firewall UI, but the rules were not published to the hosts. This issue was resolved in NSX 6.1.3.
  • Simultaneous deployment of a high number of virtual machines resulted in a network adapter connection failure
    HA aborted several failover attempts for virtual machines after deployment and no dvPort data was loaded. The affected virtual machines were marked to start with their network adapters disconnected. This issue was resolved in NSX 6.1.3.
  • Adding Logical Switch to a Security Group failed
    Adding a new Logical Switch or editing an existing Logical Switch as the include or exclude value in a Service Composer security group failed to complete, and the UI appeared to hang. This issue was resolved in NSX 6.1.3.
  • Cannot view VM settings from Web Client if VM had security tags
    Viewing a VM's settings from the vSphere Web Client failed for users with no NSX role and displayed the error status code = 403, status message = [Forbidden] if the VM included security tag information. This issue was resolved in NSX 6.1.3.

The following issues were resolved in the 6.1.2 release:

  • vNICs get ejected because of insufficient ESXi heap memory
    Based on the number of filters, firewall rules, and grouping constructs on an ESXi host, the allocated heap memory on the host may be exceeded. This results in the vNICs being disconnected.
    The ESXi heap memory has been increased to 1.5 GB in this release. This issue was resolved in NSX 6.1.2.
  • When objects used in firewall rules are renamed, the new name is not reflected on the Firewall table
    This issue was resolved in NSX 6.1.2.
  • Request to add support for NTLM authentication in NSX Edge Load Balancer
    This issue was resolved in NSX 6.1.2.
  • NSX vSphere CPU licenses are displayed as VM licenses
    NSX vSphere CPU entitlements are displayed as VM entitlements in the vSphere Licensing tab. For example, if a customer has licenses for 100 CPUs, the UI displays 100 VMs. This issue was resolved in NSX 6.1.2.
  • Implicit deny rule for BGP filters created on Edge Services Gateway but not on Logical Router
    When a BGP outbound neighbor filter is configured on an Edge Services Gateway, only prefixes with explicit accept policy are advertised. Hence an implicit deny rule is created automatically. On a Logical Router, all prefixes are advertised unless explicitly blocked. This issue was resolved in NSX 6.1.2.

The following issue was resolved in the 6.1.1 release:

  • NSX appliances vulnerable to BASH Shellshock security vulnerability
    This patch updates Bash libraries in the NSX appliances to resolve multiple critical security issues, commonly referred to as Shellshock. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifiers CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, and CVE-2014-7187 to these issues.
    To address this vulnerability, you must upgrade all NSX appliances. To upgrade, follow the instructions in the NSX Installation and Upgrade Guide. This issue was resolved in NSX 6.1.1.

The following issues were resolved in the 6.1.0 release:

  • Microsoft Clustering Services failover does not work correctly with Logical Switches
    When virtual machines send ARP probes as part of the duplicate address detection (DAD) process, the VXLAN ARP suppression layer responds to the ARP request. This causes the IP address acquisition to fail, which results in failure of the DAD process. This issue was resolved in NSX 6.1.0.
  • NSX Manager does not restore correctly from backup
    After NSX Manager is restored from backup, communication channels with Logical Router control virtual machine do not recover correctly. Because of this, logical switches and portgroups cannot be connected to and disconnected from the Logical Router. This issue was resolved in NSX 6.1.0.
  • Logical routing configuration does not work in stateless environment
    When using stateless ESXi hosts with NSX, the NSX Controller may send distributed routing configuration information to the hosts before the distributed virtual switch is created. This creates an out of sync condition, and connectivity between two hosts on different switches will fail. This issue was resolved in NSX 6.1.0.
  • Enabling HA on a deployed Logical Router causes the router to lose its distributed routes on ESXi hosts
    The Logical Router instance is deleted and re-created on ESXi hosts as part of the process of enabling HA. After the instance is re-created, routing information from the router's control virtual machine is not re-synced correctly. This makes the router lose its distributed routes on ESXi hosts. This issue was resolved in NSX 6.1.0.
  • REST request fails with error HTTP/1.1 500 Internal Server Error
    When Single Sign-On (SSO) is not configured correctly, all REST API calls fail with this message because NSX cannot validate the credentials. This issue was resolved in NSX 6.1.0.
  • When navigating between NSX Edge devices, vSphere Web Client hangs or displays a blank page
    This issue was resolved in NSX 6.1.0.
  • HA enabled Logical Router does not redistribute routes after upgrade or redeploy
    When you upgrade or redeploy a Logical Router which has High Availability enabled on it, the router does not redistribute routes. This issue was resolved in NSX 6.1.0.
  • Cannot configure OSPF on more than one NSX Edge uplink
    It is not possible to configure OSPF on more than one of the NSX Edge uplinks. This issue was resolved in NSX 6.1.0.
  • Error configuring IPSec VPN
    When you configure the IPSec VPN service, you may see the following error:
    [Ipsec] The localSubnet: xxx.xxx.xx.x/xx is not reachable, it should be reachable via static routing or one subnet of internal edge interfaces. This issue was resolved in NSX 6.1.0.
  • Problems deleting EAM agencies
    In order to successfully remove EAM agencies from ESX Agent Manager (EAM), the NSX Manager that deployed the services corresponding to the EAM agencies must be available. This issue was resolved in NSX 6.1.0.
  • No warning displayed when a logical switch used by a firewall rule is being deleted
    You can delete a logical switch even if it is in use by a firewall rule. The firewall rule is marked as invalid, but the logical switch is deleted without any warning that the switch is used by the firewall rule. This issue was resolved in NSX 6.1.0.
  • Load balancer pool member displays WARNING message
    Even though a load balancer pool member shows a WARNING message, it can still process traffic. You can ignore this message. This issue was resolved in NSX 6.1.0.
  • Cannot configure untagged interfaces for a Logical Router
    The VLAN ID for the vSphere distributed switch to which a Logical Router connects cannot be 0. This issue was resolved in NSX 6.1.0.
  • L2 VPN with IPV6 is not supported in NSX 6.1.x releases
    This issue was resolved in NSX 6.1.0.
  • Firewall rules that use invalid logical switches as source or destination are displayed
    A logical switch can be deleted independently of the firewall rule. Since no confirmation message is displayed before deletion of a logical switch, you can delete a logical switch without being aware that it is being used in a firewall rule. This issue was resolved in NSX 6.1.0.
  • After upgrade from 5.5 to 6.0.x, VXLAN connectivity fails if enhanced LACP teaming is enabled
    When a data center has at least one cluster with enhanced LACP teaming enabled, communication between two hosts in any of the clusters may be affected. This issue does not happen when upgrading from NSX 6.0.x to NSX 6.1. This issue was resolved in NSX 6.1.0.
  • Policy configuration is not updated until there is change in configuration
    This issue was resolved in NSX 6.1.0.

Document Revision History

9 June 2015: Added known issue 1328589 / 1457120 / 1456184.
27 May 2015: Clarified issue 1446544 / 1445066. (6.1.3-to-6.1.4 upgrades not recommended.)
22 May 2015: Added Comprehensive What's New List.
15 May 2015: Added Known Issues 1445066 and 1441319.
07 May 2015: First Edition for NSX 6.1.4.