NSX for vSphere 6.0.7 Release Notes
NSX for vSphere 6.0.7 | 04 OCT 2014 | Build 2176282
What's in the Release Notes
The release notes cover the following topics:
NSX vSphere 6.0.7 contains patches for all NSX appliances. These patches address the BASH Shellshock security vulnerability. VMware recommends that you upgrade to this release. You can upgrade to this release from vCloud Networking and Security 5.5.x and NSX vSphere 6.0.x.
NOTE: NSX vSphere version 6.1.1 is now available. NSX customers should consider upgrading to version 6.1.1 to pick up important improvements.
System Requirements and Installation
VMware Product Interoperability Matrix provides details about the compatibility of current and previous versions of VMware products and components, such as VMware vCenter Server.
To upgrade to this release, follow the steps below.
- Back up your NSX Manager data and take a snapshot of NSX Manager.
- Uninstall Data Security from all clusters.
- Upgrade NSX Manager to the 6.0.7 release. For instructions, see NSX Installation and Upgrade Guide.
- Retrieve the controller IDs and IP addresses by logging in to the vSphere Web Client. Navigate to Networking & Security > Installation. The NSX Controller Nodes table lists the controller IDs (Name column) and IP addresses (Node column) of each controller. You will need these when you take a backup of the controllers and upgrade them.
- Take a snapshot of the controller cluster from the specified controller node using the following REST API call.
The output of the GET call is an octet stream containing the controller snapshot. An example call to download the snapshot follows.
curl -u admin:default -H "Accept: application/octet-stream" -X GET -k https://NSXManagerIPAddress/api/2.0/vdn/controller/controllerID/snapshot > controller_backup.snapshot
If the upgrade process to NSX vSphere 6.0.7 is not successful, call VMware Support for help in restoring the controller backup.
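As a convenience, the snapshot call above can be templated across all controllers. The sketch below only writes the curl commands to a file for review rather than running them; the manager address and controller IDs are placeholders for the values recorded in step 3.

```shell
#!/bin/sh
# Sketch: generate one backup command per controller from the IDs
# recorded in step 3. NSX_MGR and CONTROLLER_IDS are placeholders;
# substitute your NSX Manager IP and the Name-column values.
NSX_MGR="NSXManagerIPAddress"
CONTROLLER_IDS="controller-1 controller-2 controller-3"

for ID in $CONTROLLER_IDS; do
  # Emit the same GET call as above, saving one snapshot file per controller.
  printf 'curl -u admin:default -H "Accept: application/octet-stream" -X GET -k "https://%s/api/2.0/vdn/controller/%s/snapshot" > %s_backup.snapshot\n' \
    "$NSX_MGR" "$ID" "$ID"
done > backup_controllers.sh
```

Review backup_controllers.sh and run it only after confirming the manager address and controller IDs are correct.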
- Upgrade the controllers in your environment one at a time by following the steps below.
- Download the upgrade bundle file VMware-NSX-Controller-upgrade-bundle-6.0.7-39278.nub from the Nicira portal and save it to a location that is accessible through an SSH server.
- Set file permissions to ensure that the upgrade bundle file is readable and writable:
chmod 644 VMware-NSX-Controller-upgrade-bundle-6.0.7-39278.nub
- Log in to the first controller using SSH by typing the following command in a terminal window:
ssh -l admin controllerIPAddress
See step 3 for information on retrieving controller IP addresses.
- Copy the upgrade bundle file to the controller by typing the following in the terminal window:
copy file username@server:/Path_to_upgrade_bundle_file targetFileName
- Type the following command to complete the upgrade:
install software-update targetFileName
Type y at the prompt asking whether you want to continue.
After installation is complete, you are logged out of the SSH session. Wait a few minutes, and then log back in to the controller. Verify that the welcome banner shows the new build number (39278).
- Ensure that the upgraded controller is active by typing the following CLI command in a terminal window.
show control-cluster status
The output should show Majority status as Connected to cluster majority.
- Repeat steps c through f for each controller in your environment.
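The per-controller sequence in steps c through f can be sketched as a dry-run listing. The block below prints the commands rather than executing them, since they must run against a live controller; the controller address and the SSH server path (/tmp) are placeholders.

```shell
#!/bin/sh
# Dry-run sketch of steps c through f for a single controller.
# CONTROLLER_IP stands in for an address recorded in step 3, and
# admin@sshserver:/tmp is an illustrative SSH server location.
CONTROLLER_IP="controllerIPAddress"
BUNDLE="VMware-NSX-Controller-upgrade-bundle-6.0.7-39278.nub"

SESSION=$(cat <<EOF
ssh -l admin ${CONTROLLER_IP}
# On the controller CLI:
copy file admin@sshserver:/tmp/${BUNDLE} ${BUNDLE}
install software-update ${BUNDLE}
# Log back in after a few minutes, then verify the cluster:
show control-cluster status
EOF
)
echo "$SESSION"
```

Repeat the same sequence for each remaining controller, one at a time, waiting for the cluster status check to pass before moving on.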
Navigate to the Controller UI to regenerate the certificate for each controller. For this, you need to delete each controller one at a time and add a replacement controller. Follow the steps below carefully.
If you have a single controller in your environment, do not follow the steps below; instead, call VMware Support.
Ensure that you have the list of controller IDs and IP addresses from step 3.
- Go back to the Installation tab of the vSphere Web Client.
- Delete a controller by selecting it and clicking the Delete icon.
- Wait for a few minutes.
- Add a controller to replace the first controller by clicking the Add icon and completing the fields in the Add Controller dialog box. The Status column for the new controller shows Deploying.
Proceed to the next step only when the Status column for the newly added controller says Normal with a green check mark.
Also, note that the Software Version column for the new controller says 6.0 instead of 6.0.7.
- Repeat the delete, wait, and add steps above for each controller in your environment, one controller at a time.
- Upgrade Guest Introspection to the 6.0.7 release. For instructions, see NSX Installation and Upgrade Guide.
- Install Data Security on the appropriate clusters.
- Update the host clusters in your environment and upgrade NSX Edge virtual machines to NSX vSphere 6.0.7. For instructions, see NSX Installation and Upgrade Guide.
If you upgrade an NSX Logical Router which has High Availability enabled on it, you must re-synchronize it with NSX Manager by selecting More Actions > Force Sync.
The known issues are grouped as follows:
Installation and Upgrade Issues
NSX Edge upgrade fails but NSX Manager does not roll back Edge appliances to the older version
If an NSX Edge upgrade fails and the system does not roll back, you may experience network disruption and may be unable to manage the Edge.
Workaround: Redeploy NSX Edge and then upgrade the Edge again.
SSL VPN client must be uninstalled and re-installed after upgrade
After upgrading to vShield 6.0.5, you must uninstall the SSL VPN client and then reinstall it. To install the latest client, go to https://ssl-vpn-ip-address, where ssl-vpn-ip-address is the uplink IP address assigned to the Edge interface on which the SSL VPN service is configured to listen.
vSphere Distributed Switch MTU does not get updated
If you specify an MTU value lower than the MTU of the vSphere distributed switch when preparing a cluster, the vSphere Distributed Switch does not get updated to this value. This is to ensure that existing traffic with the higher frame size isn't unintentionally dropped.
Workaround: Ensure that the MTU you specify when preparing the cluster is higher than or matches the current MTU of the vSphere distributed switch. The minimum required MTU for VXLAN is 1550.
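As a quick sanity check, the MTU rule above can be expressed as a small shell test before preparing the cluster. The current vDS MTU shown here is a placeholder value; read the real value from the distributed switch settings.

```shell
#!/bin/sh
# Sketch: validate a proposed VXLAN MTU against the constraints stated
# above. CURRENT_VDS_MTU and PROPOSED_MTU are placeholder values.
CURRENT_VDS_MTU=1600
PROPOSED_MTU=1600

if [ "$PROPOSED_MTU" -lt 1550 ]; then
  # VXLAN encapsulation needs at least 1550 bytes.
  echo "MTU below VXLAN minimum of 1550"
elif [ "$PROPOSED_MTU" -lt "$CURRENT_VDS_MTU" ]; then
  # A lower value is ignored so existing larger frames are not dropped.
  echo "Proposed MTU is lower than the vDS MTU and will not be applied"
else
  echo "MTU OK"
fi
```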
If all clusters in your environment are not prepared, the Upgrade message for Distributed Firewall does not appear on the Host Preparation tab of Installation page
When you prepare clusters for network virtualization, Distributed Firewall is enabled on those clusters. If all clusters in your environment are not prepared, the upgrade message for Distributed Firewall does not appear on the Host Preparation tab.
Workaround: Use the following REST call to upgrade Distributed Firewall:
Service virtual machine deployed using the Service Deployments tab on the Installation page does not get powered on
Workaround: Follow the steps below.
- Manually remove the service virtual machine from the ESX Agents resource pool in the cluster.
- Click Networking and Security and then click Installation.
- Click the Service Deployments tab.
- Select the appropriate service and click the Resolve icon.
The service virtual machine is redeployed.
After upgrading NSX from version 6.0 to 6.0.x, NSX Edges are not listed on the UI
When you upgrade from NSX 6.0 to NSX 6.0.x, the vSphere Web Client plug-in may not upgrade correctly. This may result in UI display issues such as missing NSX Edges.
This issue is not seen if you are upgrading from NSX 6.0.1 or later.
Workaround: Follow the steps below.
This ensures deployment of the latest plug-in package.
- On the vSphere Server, navigate to the following location:
- Delete the following folders:
- Restart the vSphere Web Client service.
NSX vSphere CPU licenses are displayed as VM licenses
NSX vSphere CPU entitlements are displayed as VM entitlements in the vSphere Licensing tab. For example, if a customer has licenses for 100 CPUs, the UI displays 100 VMs.
REST request fails with error
HTTP/1.1 500 Internal Server Error
When Single Sign-On (SSO) is not configured correctly, all REST API calls fail with this message because NSX cannot validate the credentials.
Workaround: Configure SSO as described in the NSX Administration Guide.
When navigating between NSX Edge devices, vSphere Web Client hangs or displays a blank page
Workaround: Restart your browser.
vSphere Web Client displays error:
Cannot complete the operation, see the event log for details
When you install services such as vShield Endpoint or a partner appliance through the Service Deployments tab, the vSphere Web Client may display the above error. You can ignore this error.
Cannot remove and re-add a host to a cluster protected by vShield Endpoint and third-party security solutions
If you remove a host from a cluster protected by vShield Endpoint and third-party security solutions by disconnecting it and then removing it from vCenter Server, you may experience problems if you try to re-add the same host to the same cluster.
Workaround: To remove a host from a protected cluster, first put the host in maintenance mode. Next, move the host into an unprotected cluster or outside all clusters, and then disconnect and remove the host.
NSX Manager Issues
NSX Manager does not restore correctly from backup
After NSX Manager is restored from backup, communication channels with the Logical Router control virtual machine do not recover correctly. As a result, logical switches and portgroups cannot be connected to or disconnected from the Logical Router.
Workaround: After restoring the NSX Manager backup, reboot all Logical Router control virtual machines.
vMotion of NSX Manager may display error
Virtual ethernet card Network adapter 1 is not supported
You can ignore this error. Networking will work correctly after vMotion.
Cannot delete third-party services after restoring NSX Manager backup
Third-party service deployments can be deleted from the vSphere Web Client only if the restored state of NSX Manager contains the third-party service registration.
Workaround: Take a backup of the NSX Manager database after all third-party services have been registered.
NSX Edge Issues
Enabling HA on a deployed Logical Router causes the router to lose its distributed routes on ESXi hosts
The Logical Router instance is deleted and re-created on ESXi hosts as part of the process of enabling HA. After the instance is re-created, routing information from the router's control virtual machine is not re-synced correctly. This makes the router lose its distributed routes on ESXi hosts.
Workaround: Reboot the Logical Router control virtual machine after enabling HA to restore the routes.
HA enabled NSX Logical Router does not redistribute routes after upgrade or redeploy
When you upgrade or redeploy an NSX Logical Router which has High Availability enabled on it, the router does not redistribute routes.
Workaround: After upgrading the NSX Logical Router, re-synchronize it with NSX Manager by selecting More Actions > Force Sync.
Load balancer pool member displays WARNING message
Even though a load balancer pool member shows a WARNING message, it can still process traffic. You can ignore this message.
Cannot configure untagged interfaces for a logical router
The VLAN ID for the vSphere distributed switch to which a logical (distributed) router connects cannot be 0.
Workaround: Create tagged interfaces only.
VDR LIF routes are advertised by upstream ESG even if VDR OSPF is disabled
Upstream Edge Services Gateway (ESG) will continue to advertise OSPF external LSAs learned from VDR connected interfaces even when VDR OSPF is disabled.
Workaround: Disable redistribution of connected routes into OSPF manually and publish before disabling OSPF protocol. This ensures that routes are properly withdrawn.
When HA is enabled on gateway, OSPF hello and dead interval configured to values less than 30 seconds and 120 seconds respectively can cause some traffic loss during failover
When the primary NSX Edge fails with OSPF running and HA enabled, the time required for standby to take over exceeds the graceful restart timeout and results in OSPF neighbors removing learned routes from their Forwarding Information Base (FIB) table. This results in dataplane outage until OSPF reconverges.
Workaround: Set the default hello/dead interval timeouts on all neighboring routers to 30 seconds for hello interval and 120 seconds for dead interval. This enables graceful failover without traffic loss.
HA configuration fails if you choose the same interface for HA management and L2 VPN configuration
If an NSX Edge has L2 VPN enabled and you try to enable HA on the same NSX Edge, the configuration may fail. There are two cases in which this can happen:
- If you manually select the same interface for HA management and L2 VPN.
- If you select automatic HA configuration, in which case HA may end up using the same interface as L2 VPN.
Workaround: Select a dedicated HA management interface that is different from the L2 VPN interface.
L2 VPN enabled NSX Edge may give inconsistent results if High Availability is enabled
Workaround: Do not enable High Availability on an NSX Edge that has L2 VPN configured on it.