VSS/VDS Migration to N-VDS with NSX-T

Most customers are familiar with VMware virtual switches such as the vSphere Standard Switch (VSS) or the vSphere Distributed Switch (VDS). With NSX-T Data Center, VMware introduced a new virtual switch named N-VDS. The reason behind this decision is to decouple NSX from vCenter. Those who are already running NSX for vSphere (NSX-V) know that there is a 1:1 relationship between the NSX-V Manager and vCenter, where NSX-V leverages the VDS to offer networking and security features. This is no longer the case with NSX-T.

VSS/VDS/N-VDS Differences

Here is a quick recap of the definitions and differences.
  • The VSS contains both data and management planes at the individual host level.
  • The VDS architecture logically separates the data and management planes. The VDS is managed by vCenter.
  • The N-VDS takes advantage of both worlds. The N-VDS is decoupled from vCenter, meaning that NSX supports multiple platforms (vSphere, KVM, containers, clouds), and the N-VDS supports both VLAN and Overlay segments. The N-VDS is managed by NSX Manager and can be applied per host or per cluster.

The following picture shows differences between VSS, VDS and N-VDS.
In which case do you need to migrate from VSS/VDS to N-VDS? The answer is pretty simple: when the host has a maximum of two (2) physical network interface cards (PNICs). Because hardware resources are limited and high availability is a requirement, you unfortunately cannot keep your existing VSS/VDS and must migrate to the N-VDS.

N-VDS Terminology

Physical Interface

Like a traditional virtual switch, the N-VDS must own one or more PNICs, and these cannot be shared with another virtual switch.

Transport Zone

The N-VDS supports two (2) types of traffic: VLAN and Overlay traffic.
  • A VLAN N-VDS can carry existing or new VLANs and attaches them to VMs.
  • An Overlay N-VDS encapsulates VM traffic into UDP packets. This encapsulation is handled by the Geneve protocol.
All hosts (or Transport Nodes) sharing the same N-VDS define a domain called a Transport Zone.
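Since an Overlay N-VDS wraps every VM frame in outer headers, the physical network needs a larger MTU. Here is a minimal sketch of the arithmetic, assuming IPv4 outer headers and no Geneve options:

```python
# Back-of-the-envelope check of the extra bytes Geneve encapsulation adds
# to each frame (standard Ethernet/IPv4/UDP header sizes, no Geneve options).
OUTER_ETHERNET = 14  # outer Ethernet header
OUTER_IPV4 = 20      # outer IPv4 header, no options
OUTER_UDP = 8        # UDP header (Geneve uses UDP port 6081)
GENEVE_BASE = 8      # fixed Geneve header; variable-length options can add more

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + GENEVE_BASE
required_underlay_mtu = 1500 + overhead  # for a standard 1500-byte VM frame

print(overhead)               # 50
print(required_underlay_mtu)  # 1550 -> hence the 1600-byte MTU NSX-T uses
```

This is why the uplink profile we will build later carries an MTU of 1600 bytes: it leaves comfortable headroom above the 1550-byte minimum.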

There are some rules regarding Transport Zone, N-VDS and Transport Nodes (from the NSX-T Reference Design Guide - https://communities.vmware.com/docs/DOC-37591):
  • An N-VDS can attach to multiple VLAN transport zones. However, if VLAN segments belonging to different VLAN transport zones have conflicting VLAN IDs, only one of those segments will be “realized” (i.e. working effectively) on the N-VDS.
  • An N-VDS can attach to a single overlay transport zone and multiple VLAN transport zones at the same time.
  • There can be multiple N-VDSs on the host, each attaching to a different set of transport zones.
  • Multiple virtual switches, N-VDS, VDS or VSS, can coexist on a transport node; however, a pNIC can only be associated with a single virtual switch.
  • A transport zone can only be attached to a single N-VDS on a given transport node.  

Uplink Profile

NSX-T introduces the concept of logical uplinks for the N-VDS in order to provide features like LAGs and to define teaming policies. N-VDS logical uplinks are defined via uplink profiles.

One logical N-VDS uplink can be mapped to a dedicated PNIC, or to two (2) or more PNICs in LAG mode.

NSX-T supports two (2) different teaming policies: "Failover Order" and "Load Balance Source Port" (only on ESXi). For example, a host with two (2) PNICs can support active/active teaming policy ("Load Balance Source Port") for Overlay traffic and active/standby teaming policy ("Failover Order") for VLAN based traffic at the same time.
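As a hedged sketch, the two teaming policies look roughly like this in an NSX-T uplink profile (Manager API, `UplinkHostSwitchProfile` object); the uplink names "uplink-1" and "uplink-2" are arbitrary labels chosen for this illustration:

```python
# Active/standby: all traffic uses uplink-1; uplink-2 takes over on failure.
failover_order = {
    "policy": "FAILOVER_ORDER",
    "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
    "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
}

# Active/active (ESXi only): traffic is pinned per virtual-switch source port.
load_balance_source = {
    "policy": "LOADBALANCE_SRCID",
    "active_list": [
        {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
        {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
    ],
}
```

Check the API guide for your NSX-T version for the exact enum values; the shapes above are based on the Manager API of the NSX-T 2.4/2.5 era.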

In addition to uplink definitions and teaming policies, Uplink Profiles specify other parameters such as:
  • MTU Size
  • Overlay transport VLAN

The following diagram shows two (2) different examples of uplink profiles: one for ESXi transport nodes, where each uplink is mapped to a dedicated PNIC and supports the "Load Balance Source Port" teaming policy, and another for KVM transport nodes, where each uplink is mapped to a dedicated PNIC and supports the "Failover Order" teaming policy. You can also see that the two uplink profiles use different VLAN IDs for the overlay traffic.


Segment

An NSX-T Segment is a layer 2 broadcast domain. Compared with a VSS/VDS, a segment is the equivalent of a Port Group (or DvPG) to which the VMs are attached. Depending on the N-VDS type, a segment can be a VLAN backed segment or an Overlay backed segment.

Now that we are familiar with all the N-VDS terminology, let's see how to migrate from VSS/VDS to N-VDS.


Existing Environment

We have two (2) hosts (actually, it is the same host on which I did both the VSS and the VDS configurations):
  • One host running VSS: vSwitch0 with one (1) PNIC and two (2) Port Groups
    • The Management Network uses VLAN 599
    • The vMotion Network uses VLAN 598

  • One host running VDS: VDS-NSXT-Collapsed with two (2) PNICs and two (2) DvPGs
    • The Management Network uses VLAN 599
    • The vMotion Network uses VLAN 598



Step 1 - Transport Zone Definition 

We create two (2) Transport Zones attached to the same N-VDS, named "HOST-NVDS":
  • Compute-Overlay-TZ
  • Compute-VLAN-TZ
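If you prefer to script this step, a hedged sketch of the payloads sent to the Manager API endpoint `POST https://<nsx-manager>/api/v1/transport-zones` could look like this (field names based on the NSX-T 2.4/2.5 Manager API):

```python
def transport_zone(name: str, transport_type: str) -> dict:
    """Build a minimal transport-zone payload; transport_type is
    either "OVERLAY" or "VLAN". Both zones reference the same
    host switch name, "HOST-NVDS"."""
    return {
        "display_name": name,
        "host_switch_name": "HOST-NVDS",
        "transport_type": transport_type,
    }

overlay_tz = transport_zone("Compute-Overlay-TZ", "OVERLAY")
vlan_tz = transport_zone("Compute-VLAN-TZ", "VLAN")
```

Each payload would be POSTed once; the returned transport-zone IDs are needed later when configuring the transport node.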

Step 2 - Uplink Profile

As described above, an uplink profile is required. This profile, "Compute-2pNIC-Failover-Profile", defines:
  • The VLAN ID 596 for the Overlay traffic
  • The MTU size of 1600 bytes (default value)
  • The teaming policy - "Failover Order" is used in this example
  • No LAG is configured
Note: the [DEFAULT] teaming policy applies to the Overlay traffic, while "NAMED" teaming policies apply to VLAN backed segments.
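The equivalent API payload, posted to `POST https://<nsx-manager>/api/v1/host-switch-profiles`, would look roughly like this (a sketch assuming the Manager-API `UplinkHostSwitchProfile` schema of the NSX-T 2.4/2.5 era):

```python
# Uplink profile matching the four bullets above.
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "Compute-2pNIC-Failover-Profile",
    "mtu": 1600,            # default overlay MTU
    "transport_vlan": 596,  # VLAN ID carrying the overlay (Geneve) traffic
    "teaming": {            # the [DEFAULT] policy, used for overlay traffic
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "lags": [],             # no LAG configured in this example
}
```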

Step 3 - Segments

For the migration, we need to "map" the existing Port Groups or DvPGs to NSX-T segments. In the following picture, you can see that two (2) VLAN backed segments have been created with the following information:
  • The Management Network uses VLAN 599
  • The vMotion Network uses VLAN 598
Note: both segments are part of the "Compute-VLAN-TZ" Transport Zone.
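For reference, the two VLAN backed segments can also be created through the Policy API with `PATCH https://<nsx-manager>/policy/api/v1/infra/segments/<segment-id>`. The sketch below is an assumption based on the Policy-API Segment schema; the transport-zone path placeholder `<tz-id>` must be replaced with the real ID of "Compute-VLAN-TZ" in your environment:

```python
# Placeholder path: substitute <tz-id> with the actual transport-zone ID.
VLAN_TZ_PATH = "/infra/sites/default/enforcement-points/default/transport-zones/<tz-id>"

def vlan_segment(name: str, vlan_id: int) -> dict:
    """Build a minimal VLAN backed segment payload."""
    return {
        "display_name": name,
        "vlan_ids": [str(vlan_id)],
        "transport_zone_path": VLAN_TZ_PATH,
    }

mgmt_segment = vlan_segment("Management-Network", 599)
vmotion_segment = vlan_segment("vMotion-Network", 598)
```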

Step 4 - Transport Node

Let's configure the ESXi transport node. First, select the host (this can also be done per cluster) and click on "Configure NSX".

Then select:
- Transport Zones: "Compute-Overlay-TZ" and "Compute-VLAN-TZ"
- Uplink Profile: "Compute-2pNIC-Failover-Profile"
- NIOC Profile: default
- LLDP Profile: Enabled or Disabled
- IP Pool: I assume that you have already created one. In my case, I created "VTEP-IP-Pool".
- Physical NICs: select the existing PNICs and map them to the NSX-T logical uplinks defined in the "Compute-2pNIC-Failover-Profile" Uplink Profile
- Network Mapping for Install: click on the mapping. This is where we map the existing VMkernel Adapters to Segments. Here, vmk0 is for management and vmk1 is for vMotion. These VMkernel Adapters will be migrated from Port Groups to NSX-T segments.
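The selections above can also be assembled into a single transport-node payload for `POST https://<nsx-manager>/api/v1/transport-nodes`. This is a hedged sketch: the IDs in angle brackets are placeholders, and `vmk_install_migration` is, to my reading, the Manager-API counterpart of "Network Mapping for Install" - check the API guide for your NSX-T version before relying on the exact field names:

```python
transport_node = {
    "display_name": "esxi-host-01",       # assumed host name for this example
    "node_id": "<host-node-id>",
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [{
            "host_switch_name": "HOST-NVDS",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile",
                 "value": "<Compute-2pNIC-Failover-Profile-id>"}
            ],
            "pnics": [  # map physical NICs to the logical uplinks
                {"device_name": "vmnic0", "uplink_name": "uplink-1"},
                {"device_name": "vmnic1", "uplink_name": "uplink-2"},
            ],
            "ip_assignment_spec": {  # VTEP addressing from the IP pool
                "resource_type": "StaticIpPoolSpec",
                "ip_pool_id": "<VTEP-IP-Pool-id>",
            },
            "vmk_install_migration": [  # VMkernel adapters moved to segments
                {"device_name": "vmk0", "destination_network": "<mgmt-segment-id>"},
                {"device_name": "vmk1", "destination_network": "<vmotion-segment-id>"},
            ],
        }],
    },
    "transport_zone_endpoints": [
        {"transport_zone_id": "<Compute-Overlay-TZ-id>"},
        {"transport_zone_id": "<Compute-VLAN-TZ-id>"},
    ],
}
```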


When you save and apply the NSX configuration to the transport node ESXi, you should see the following results:
- Configuration State = Success
- Node Status = Up
- Transport Zones applied
- N-VDS applied

Now let's see the result on the host itself.

VSS migration Results

As you can see, the existing vmnic0 from the VSS vSwitch0 is now migrated to the N-VDS "HOST-NVDS", and the VMkernel Adapters are now attached to NSX-T Segments.

VDS migration Results

The same applies to the VDS "VDS-NSXT-Collapsed": both vmnic0 and vmnic1 are now associated with the N-VDS "HOST-NVDS", and the VMkernel Adapters are attached to NSX-T Segments.
I hope you enjoyed the article.
Next time, we will go through Edge VM design for NSX-T 2.4 and NSX-T 2.5.
In the meantime, feel free to ask questions or leave comments.