NSX vSphere 6.1 includes multiple new features as well as operations, consumption, and hardening enhancements.
- Highly available NSX Edge clusters with faster uplink speeds
Equal Cost Multi-Path (ECMP)
NSX enables you to create highly available and distributed NSX Edge clusters, provides high-bandwidth uplink connections to physical networks, and ensures active-active redundancy at the network virtualization edge, all in software. ECMP on NSX Edge allows up to 80 Gbps of aggregate North-South bandwidth and enables a scale-out edge.
- Enhanced micro-segmentation and firewall operations
NSX 6.1 improves micro-segmentation capabilities by providing improved provisioning, troubleshooting, and monitoring with NSX Distributed and Edge Firewalls. There is a new unified interface for configuring both Distributed and Edge firewalls. Integration of NSX with vCAC 6.1 allows for security automation workflows to be integrated with compute automation. In addition, NSX 6.1 enables traffic redirection to network and security partner products like Next Generation Firewalls and Intrusion Prevention Services.
- Connect multiple data centers or offer hybrid cloud services in Software Defined Datacenter (SDDC)
Layer 2 VPN on NSX Edge
With Layer 2 VPN, enterprises can migrate workloads, consolidate datacenters, or create stretched application tiers across multiple datacenters. Service providers can offer tenant on-boarding and cloud bursting services where tenant application networks are preserved across datacenters without the need for NSX on customer premises.
- Unified IP Address management across the entire data center
DHCP Relay
With DHCP Relay, you can integrate existing DHCP services available in physical data centers into the SDDC. This ensures a consistent IP addressing policy and easy IP management across the entire data center. NSX vSphere 6.1 supports multiple DHCP servers on a single logical router and allows multiple existing DHCP servers to be integrated.
- NSX Load Balancer Enhancement
To allow the load balancing and high availability of more applications hosted in NSX, UDP and FTP load balancing is now available on NSX. This allows the load balancing of applications such as syslog, NTP, and DNS.
- Protect Application Delivery Controller (ADC) investments and seamlessly leverage them in SDDC
Tight integrations with partners to enable ADCaaS
NSX 6.1 allows customers using NSX partner ADCs to protect their investment and leverage advanced ADC services from best-of-breed vendors. This out-of-the-box solution brings operational simplicity, integrated workflows, auto deployment of resources, and a central pane for troubleshooting and monitoring both virtual and physical ADCs.
- Advanced host or network security services within SDDC
Enhanced partner integration with Service Composer supports multiple security services, including suite solutions that comprise both host-based and network-based services in a single policy.
- Dynamic and secure self-service in SDDC
NSX 6.1 with vCloud Automation Center® helps you optimize resource utilization and scale by dynamically connecting self-service applications to NSX logical networks while ensuring that infrastructure security policies are automatically applied to isolate and protect the applications.
Refer to the VMware vCloud Automation Center Release Notes for the feature list.
System Requirements and Installation
For information about system requirements and installation instructions, see the NSX Installation and Upgrade Guide.
The VMware Product Interoperability Matrix provides details about the compatibility of current and previous versions of VMware products and components, such as VMware vCenter Server.
Resolved Issues
The following issues have been resolved in the 6.1 release.
- Microsoft Clustering Services failover does not work correctly with Logical Switches
When virtual machines send ARP probes as part of the duplicate address detection (DAD) process, the VXLAN ARP suppression layer responds to the ARP request. This causes the IP address acquisition to fail, which results in failure of the DAD process.
- NSX Manager does not restore correctly from backup
After NSX Manager is restored from backup, communication channels with Logical Router control virtual machine do not recover correctly. Because of this, logical switches and portgroups cannot be connected to and disconnected from the Logical Router.
- Logical routing configuration does not work in stateless environment
When using stateless ESXi hosts with NSX, the NSX Controller may send distributed routing configuration information to the hosts before the distributed virtual switch is created. This creates an out of sync condition, and connectivity between two hosts on different switches will fail.
- Enabling HA on a deployed Logical Router causes the router to lose its distributed routes on ESXi hosts
The Logical Router instance is deleted and re-created on ESXi hosts as part of the process of enabling HA. After the instance is re-created, routing information from the router's control virtual machine is not re-synced correctly. This makes the router lose its distributed routes on ESXi hosts.
- REST request fails with error HTTP/1.1 500 Internal Server Error
When Single Sign-On (SSO) is not configured correctly, all REST API calls fail with this message because NSX cannot validate the credentials.
- When navigating between NSX Edge devices, vSphere Web Client hangs or displays a blank page
- HA enabled Logical Router does not redistribute routes after upgrade or redeploy
When you upgrade or redeploy a Logical Router which has High Availability enabled on it, the router does not redistribute routes.
- Cannot configure OSPF on more than one NSX Edge uplink
It is not possible to configure OSPF on more than one of the NSX Edge uplinks.
- Error configuring IPSec VPN
When you configure the IPSec VPN service, you may see the following error:
[Ipsec] The localSubnet: xxx.xxx.xx.x/xx is not reachable, it should be reachable via static routing or one subnet of internal edge interfaces.
- Problems deleting EAM agencies
In order to successfully remove EAM agencies from ESX Agent Manager (EAM), the NSX Manager that deployed the services corresponding to the EAM agencies must be available.
- No warning displayed when a logical switch used by a firewall rule is being deleted
You can delete a logical switch even if it is in use by a firewall rule. The firewall rule is marked as invalid, but the logical switch is deleted without any warning that the switch is used by the firewall rule.
- Load balancer pool member displays WARNING message
Even though a load balancer pool member shows a WARNING message, it can still process traffic. You can ignore this message.
- Cannot configure untagged interfaces for a Logical Router
The VLAN ID for the vSphere distributed switch to which a Logical Router connects cannot be 0.
- L2 VPN with IPv6 is not supported in this release
- Firewall rules that use invalid logical switches as source or destination are displayed
A logical switch can be deleted independently of the firewall rule. Since no confirmation message is displayed before deletion of a logical switch, you may delete a logical switch without being aware that it is being used in a firewall rule.
- After upgrade from 5.5 to 6.0.x, VXLAN connectivity fails if enhanced LACP teaming is enabled
When a data center has at least one cluster with enhanced LACP teaming enabled, communication between two hosts in any of the clusters may be affected.
This issue does not happen when upgrading from NSX 6.0.x to NSX 6.1.
- Policy configuration is not updated until there is a change in configuration.
Known Issues
Known issues are grouped as follows:
Installation and Upgrade Issues
Appliance upgrade does not upgrade NSX Edge version or configuration
Operations other than redeploy and upgrade that require deploying a new virtual machine or replacing an existing virtual machine (such as changing the size, resource pool, or datastore setting) replace the appliance with the latest available version but do not perform a full upgrade.
Workaround: Following NSX Manager upgrade, you must upgrade NSX Edge before attempting any of the operations described above.
NSX Edge upgrade fails if L2 VPN is enabled on the Edge
L2 VPN configuration update from 5.x or 6.x to 6.1 is not supported. Hence, the Edge upgrade fails if L2 VPN is configured on the Edge.
Workaround: Delete L2 VPN configuration before upgrading NSX Edge. After the upgrade, re-configure L2 VPN.
SSL VPN does not send upgrade notification to remote client
The SSL VPN gateway does not send an upgrade notification to the user. The administrator must manually inform remote users that the SSL VPN gateway (server) has been updated and that they must update their clients.
After upgrading NSX from version 6.0 to 6.0.x or 6.1, NSX Edges are not listed on the UI
When you upgrade from NSX 6.0 to NSX 6.0.x or 6.1, the vSphere Web Client plug-in may not upgrade correctly. This may result in UI display issues such as missing NSX Edges.
This issue is not seen if you are upgrading from NSX 6.0.1 or later.
Workaround: Follow the steps below.
- On the vSphere Server, navigate to the following location:
/var/lib/vmware/vsphere-client/vc-packages/vsphere-client-serenity
- Delete the following folders:
com.vmware.vShieldManager-6.0.x.1546773
com.vmware.vShieldManager-6.0.1378053
- Restart the vSphere Web Client service.
This ensures deployment of the latest plug-in package.
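For reference, a minimal command-line sketch of the steps above, assuming the vCenter Server Appliance (Linux) and its vsphere-client init service; on a Windows-based vCenter Server, delete the same folders under the vSphere Web Client data directory and restart the service from the Services console:
# Run on the vCenter Server Appliance; path and folder names as listed above
cd /var/lib/vmware/vsphere-client/vc-packages/vsphere-client-serenity
rm -rf com.vmware.vShieldManager-6.0.x.1546773 com.vmware.vShieldManager-6.0.1378053
# Restart the vSphere Web Client service so the latest plug-in package is redeployed
service vsphere-client restart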
vSphere Distributed Switch MTU does not get updated
If you specify an MTU value lower than the MTU of the vSphere distributed switch when preparing a cluster, the vSphere Distributed Switch does not get updated to this value. This is to ensure that existing traffic with the higher frame size isn't unintentionally dropped.
Workaround: Ensure that the MTU you specify when preparing the cluster is higher than or matches the current MTU of the vSphere distributed switch. The minimum required MTU for VXLAN is 1550.
If not all clusters in your environment are prepared, the Upgrade message for Distributed Firewall does not appear on the Host Preparation tab of the Installation page
When you prepare clusters for network virtualization, Distributed Firewall is enabled on those clusters. If not all clusters in your environment are prepared, the upgrade message for Distributed Firewall does not appear on the Host Preparation tab.
Workaround: Use the following REST call to upgrade Distributed Firewall:
PUT https://vsm-ip/api/4.0/firewall/globalroot-0/state
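For example, this call can be issued with curl. This is a minimal sketch assuming basic authentication with the NSX Manager admin account; vsm-ip and the password are placeholders for your environment:
# -k skips certificate validation; replace vsm-ip and the credentials with your own values
curl -k -u admin:password -X PUT https://vsm-ip/api/4.0/firewall/globalroot-0/state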
Service groups are expanded in Edge Firewall table during upgrade from vCloud Networking and Security 5.5 to NSX
User created service groups are expanded in the Edge Firewall table during upgrade - i.e., the Service column in the firewall table displays all services within the service group. If the service group is modified after the upgrade to add or remove services, these changes are not reflected in the firewall table.
Workaround: Follow the steps below:
- Re-create the service groups in the Grouping Objects tab after upgrade.
- Edit the Service column for the affected firewall rules and point to the appropriate service groups.
Guest Introspection installation fails with error
When installing Guest Introspection on a cluster, the install fails with the following error:
Invalid format for VIB Module
Workaround: In the vCenter Web Client, navigate to vCenter Home > Hosts and Clusters and reboot the hosts that display Reboot Required.
Service virtual machine deployed using the Service Deployments tab on the Installation page does not get powered on
Workaround: Follow the steps below.
- Manually remove the service virtual machine from the ESX Agents resource pool in the cluster.
- Click Networking and Security and then click Installation.
- Click the Service Deployments tab.
- Select the appropriate service and click the Resolve icon.
The service virtual machine is redeployed.
If a service profile created in 6.0.x is bound to both security group and distributed portgroup or logical switch, Service Composer firewall rules are out of sync after upgrade to NSX 6.1
If a service profile was bound to both a security group and a distributed portgroup or logical switch in 6.0.x, Service Composer rules are out of sync after the upgrade to 6.1. Rules cannot be published from the Service Composer UI.
Workaround: Follow the steps below.
- Unbind the service profile from the distributed portgroup or logical switch through the Service definition UI.
- Create a new security group with the required distributed portgroup or logical switch as a member of that security group.
- Bind the service profile to the new security group through the Service Definition UI.
- Synchronize the firewall rules through the Service Composer UI.
General Issues
NSX vSphere CPU licenses are displayed as VM licenses
NSX vSphere CPU entitlements are displayed as VM entitlements in the vSphere Licensing tab. For example, if a customer has licenses for 100 CPUs, the UI displays 100 VMs.
vSphere Web Client 5.5 server restarts and UI becomes inaccessible
If you are using the vCenter Server appliance or running vCenter Server on a virtual machine, you may run into issues if the appliance or virtual machine does not meet certain requirements.
Workaround: Ensure that the appliance or virtual machine has a minimum memory of 12 GB and an Intel or AMD x64 processor with two or more logical cores, each with a speed of at least 2 GHz.
vSphere Web Client displays error: Cannot complete the operation, see the event log for details
When you install services such as Guest Introspection or a partner appliance through the Service Deployments tab, the vSphere Web Client may display the above error. This error does not affect the installation, so you can ignore it.
Cannot remove and re-add a host to a cluster protected by Guest Introspection and third-party security solutions
If you remove a host from a cluster protected by Guest Introspection and third-party security solutions by disconnecting it and then removing it from vCenter Server, you may experience problems if you try to re-add the same host to the same cluster.
Workaround: To remove a host from a protected cluster, first put the host in maintenance mode. Next, move the host into an unprotected cluster or outside all clusters and then disconnect and remove the host.
NSX Manager Issues
vMotion of NSX Manager may display the error Virtual ethernet card Network adapter 1 is not supported
You can ignore this error. Networking will work correctly after vMotion.
Cannot delete 3rd party services after restoring NSX Manager backup
Deployments of 3rd party services can be deleted from the vSphere Web Client only if the restored state of NSX Manager contains the 3rd party service registration.
Workaround: Take a backup of the NSX Manager database after all 3rd party services have been registered.
NSX Edge Issues
Enabling Equal-Cost Multi-Path routing (ECMP) on a Logical Router disables firewall on the router control virtual machine
Workaround: None.
Dynamic routing protocols not supported on sub interfaces
Workaround: None.
Adding a route that is learned through protocol as connected results in the local Forwarding Information Base (FIB) table showing both connected and dynamically learned routes
If you add a route already learned through protocol as connected, the local FIB shows both the connected route and the dynamically learned route. The dynamically learned route is shown as preferred over the directly connected route.
Workaround: Withdraw the learned route from the route advertisement so that it gets deleted from the FIB table and configure only the connected route.
If an NSX Edge virtual machine with one sub interface backed by a logical switch is deleted through the vCenter Web Client user interface, data path may not work for a new virtual machine that connects to the same port
When the Edge virtual machine is deleted through the vCenter Web Client user interface (and not from NSX Manager), the vxlan trunk configured on dvPort over opaque channel does not get reset. This is because trunk configuration is managed by NSX Manager.
Workaround: Manually delete the vxlan trunk configuration by following the steps below:
- Navigate to the vCenter Managed Object Browser by typing the following in a browser window:
https://vc-ip/mob?vmodl=1
- Click Content.
- Retrieve the dvsUuid value by following the steps below.
- Click the rootFolder link (for example, group-d1(Datacenters)).
- Click the data center name link (for example, datacenter-1).
- Click the networkFolder link (for example, group-n6).
- Click the DVS name link (for example, dvs-1)
- Copy the value of uuid.
- Click DVSManager and then click updateOpaqueDataEx.
- In selectionSet, add the following XML
<selectionSet xsi:type="DVPortSelection">
<dvsUuid>value</dvsUuid>
<portKey>value</portKey> <!-- port number of the DVPG where the trunk vnic got connected -->
</selectionSet>
- In opaqueDataSpec, add the following XML
<opaqueDataSpec>
<operation>remove</operation>
<opaqueData>
<key>com.vmware.net.vxlan.trunkcfg</key>
<opaqueData></opaqueData>
</opaqueData>
</opaqueDataSpec>
- Set isRuntime to false.
- Click Invoke Method.
- Repeat steps 5 through 8 for each trunk port configured on the deleted Edge virtual machine.
When Default Originate is enabled, BGP filter for deny default route does not get applied
When BGP Default Originate is enabled on an NSX Edge, a default route gets advertised to all BGP neighbors unconditionally. If you do not want a BGP neighbor to install the default route advertised by this BGP speaker, you must configure an inbound policy on that BGP neighbor to reject the default route.
Workaround: Configure an inbound policy on the appropriate BGP neighbor to reject the default route.
Implicit deny rule for BGP filters created on Edge Services Gateway but not on Logical Router
When a BGP outbound neighbor filter is configured on an Edge Services Gateway, only prefixes with explicit accept policy are advertised. Hence an implicit deny rule is created automatically. On a Logical Router, all prefixes are advertised unless explicitly blocked.
Workaround: When configuring the BGP protocol, specify the prefixes that need to be dropped even if an outbound filter is created.
Cannot add non-ASCII characters in bridge or tenant name for Logical Router
NSX controller APIs do not support non-ASCII characters.
Workaround: Use ASCII characters in bridge and tenant names. You can then edit the names to include non-ASCII characters.
SNAT and Load Balancer (with L4 SNAT) configured on a sub interface do not work
SNAT rule configuration passes on NSX Edge but the data path for the rule does not work due to RP Filter checks.
Workaround: Contact VMware Support for help in relaxing the RP filter check on NSX Edge.
When egress optimization is enabled for L2 VPN, load balancer pool members stretched across sites are shown as down
With egress optimization, both L2 VPN client and server have the same internal IP. Because of this, any packet from a pool member to the load balancer does not reach NSX Edge.
Workaround: Do one of the following.
- Disable egress optimization.
- Assign an IP for load balancer that is different from egress optimized IP.
Static routes do not get pushed to hosts when a next hop address is not specified
The UI allows you to create a static route on an NSX Edge device without specifying a next hop address. If you do not specify a next hop address, the static route does not get pushed to hosts.
Workaround: Always specify a next hop address for static routes.
Cannot configure NSX firewall using security groups or other grouping objects defined at global scope
Administrator users defined at the NSX Edge scope cannot access objects defined at the global scope. For example, if user abc is defined at Edge scope and security group sg-1 is defined at global scope, then abc will not be able to use sg-1 in firewall configuration on the NSX Edge.
Workaround: The administrator must use grouping objects defined at NSX Edge scope only, or must create a copy of the global scope objects at the NSX Edge scope.
Logical Router LIF routes are advertised by upstream Edge Services Gateway even if Logical Router OSPF is disabled
Upstream Edge Services Gateway will continue to advertise OSPF external LSAs learned from Logical Router connected interfaces even when Logical Router OSPF is disabled.
Workaround: Disable redistribution of connected routes into OSPF manually and publish before disabling OSPF protocol. This ensures that routes are properly withdrawn.
When HA is enabled on Edge Services Gateway, OSPF hello and dead interval configured to values other than 30 seconds and 120 seconds respectively can cause some traffic loss during failover
When the primary NSX Edge fails with OSPF running and HA enabled, the time required for standby to take over exceeds the graceful restart timeout and results in OSPF neighbors removing learned routes from their Forwarding Information Base (FIB) table. This results in dataplane outage until OSPF reconverges.
Workaround: Set the default hello and dead interval timeouts on all neighboring routers to 30 seconds for hello interval and 120 seconds for dead interval. This enables graceful failover without traffic loss.
The UI allows you to add multiple IP addresses to a Logical Router interface though it is not supported
This release does not support multiple IP addresses for a logical router interface.
Workaround: None.
Error displayed when modifying login or logoff scripts
When you modify a login or logoff script, the following error is displayed:
ObjectNotFoundException: core-services:202: The requested object : logon1.logoff could not be found. Object identifiers are case sensitive.
Workaround: Delete the existing script and add a new one with the modified parameters.
SSL VPN does not support Certificate Revocation Lists (CRL)
You can add a CRL to an NSX Edge, but this CRL is not consumed by SSL VPN.
Workaround: CRL is not supported, but you can enable user authentication with client certificate authentication.
Must use IP address, not hostname, to add an external authentication server to SSL VPN-Plus
You cannot use the FQDN or hostname of the external authentication server.
Workaround: You must use the IP address of the external authentication server.
Firewall Issues
If you delete the firewall configuration using a REST API call, you cannot load and publish saved configurations
When you delete the firewall configuration, a new default section is created with a new section ID. When you load a saved draft (that has the same section name but an older section ID), section names conflict and display the following error:
Duplicate key value violates unique constraint firewall_section_name_key
Workaround: Do one of the following:
- Rename the current default firewall section after loading a saved configuration.
- Rename the default section on a loaded saved configuration before publishing it.
When IPFIX configuration is enabled for Distributed Firewall, firewall ports in the ESXi management interface for NetFlow for vDS or SNMP may be removed
When a collector IP and port are defined for IPFIX, the firewall for the ESXi management interface is opened in the outbound direction for the specified UDP collector ports. This operation may remove the dynamic ruleset configuration on the ESXi management interface firewall for the following services if they were previously configured on the ESXi host:
- Netflow collector port configuration on vDS
- SNMP target port configuration
Workaround: To add the dynamic ruleset rules back, you must refresh the NetFlow settings for the vDS in the vCenter Web Client. You must also add the SNMP target again using esxcli system snmp commands. This will need to be repeated if the ESXi host is rebooted after IPFIX configuration is enabled or if the esx-vsip VIB is uninstalled from the host.
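As an illustration, re-adding an SNMP trap target on the host might look like the following; the collector address, port, and community string are placeholders, and the exact syntax should be verified against your ESXi version:
# Re-add the SNMP trap target and re-enable the SNMP agent on the ESXi host
esxcli system snmp set --targets=192.168.1.10@162/public --enable=true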
Guest Introspection Issues
Old service virtual machines not functioning
Old service virtual machines that were left behind on hosts during host removal from the vCenter Server remain disconnected and are unable to function when the host is added back to the same vCenter Server.
Workaround: Follow the steps below.
- Move the host from the protected cluster to either an unprotected cluster or outside all clusters. This will uninstall the service virtual machines from the host.
- Remove the host from the vCenter Server.