NSX-T Edge VM Design - Single N-VDS Edge VM on VDS with 4 pNICs

In a small Data Center, it is common to find clusters that host management components such as vCenter, NSX Manager, vRealize Automation, vRealize Network Insight, vRealize Log Insight, and so on. This is what we call a "Shared" Management and Compute cluster. Edge VMs can be deployed in such a shared cluster.

In the following diagram, the hosts are ESXi based and each has four (4) or more pNICs available.
In this scenario, we are going to dedicate two (2) pNICs to management components and Edge VMs. The two (2) other pNICs will be used for workload (or compute) traffic.

I use the following VLAN information in my setup for the Edge VM configuration.
  • Management VLAN: 599
  • vMotion VLAN: 598
  • TEP VLAN for Compute: 596
  • Uplink1 Trunk for Edge VM: 0-4094
  • Uplink2 Trunk for Edge VM: 0-4094

N-VDS Edge VM Diagram

We are going to configure the "Single N-VDS Edge VM Design on VDS with four (4) pNICs".
This design has been available since the NSX-T 2.5 release. The single N-VDS provides multi-TEP capability.

Compute Configuration

  • The host has four (4) pNICs available
  • A VDS named "Shared-VDS" is deployed on the host for management and Edge VM traffic
  • An N-VDS named "HOST-NVDS" is deployed for compute traffic
  • Two (2) pNICs are used for redundancy and load balancing on "Shared-VDS"
  • Two (2) pNICs are used for redundancy and load balancing on "HOST-NVDS"
  • A TEP IP pool and VLAN are defined for the Compute and Edge VMs
  • The compute Uplink Profile defines the teaming policies (this is an example; you can adjust them)
    • Load Balance for Overlay traffic
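The compute teaming policy above can be sketched as an NSX-T Manager API payload. This is a minimal sketch, not the exact profile from this setup: the MTU value and the NSX Manager hostname are assumptions, and the curl call is shown commented out because it requires a live NSX Manager.

```shell
# Minimal sketch of the compute uplink profile as a Manager API payload.
# transport_vlan 596 matches the Compute TEP VLAN used in this setup;
# the MTU of 1600 is an assumed minimum for Geneve overlay traffic.
cat > compute-uplink-profile.json <<'EOF'
{
  "resource_type": "UplinkHostSwitchProfile",
  "display_name": "Shared-Compute-2pNICs-2.4-Overlay",
  "mtu": 1600,
  "transport_vlan": 596,
  "teaming": {
    "policy": "LOADBALANCE_SRCID",
    "active_list": [
      { "uplink_name": "uplink-1", "uplink_type": "PNIC" },
      { "uplink_name": "uplink-2", "uplink_type": "PNIC" }
    ]
  }
}
EOF

# To create it, POST to a live NSX Manager (hostname is a placeholder):
# curl -k -u admin -X POST https://nsx-manager.lab.local/api/v1/host-switch-profiles \
#      -H 'Content-Type: application/json' -d @compute-uplink-profile.json

# Validate the payload locally before sending it.
python3 -m json.tool compute-uplink-profile.json > /dev/null && echo "compute payload OK"
```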

Edge VM Configuration

  • An Edge VM has four (4) interfaces available
    • eth0 is dedicated to management traffic
    • fp-eth0 is used for overlay and uplink1 traffic
    • fp-eth1 is used for overlay and uplink2 traffic
    • fp-eth2 is not used
  • A single N-VDS will be defined for the Edge VM
    • The "HOST-NVDS" N-VDS is used for the Overlay traffic and uplink traffic
  • The Edge VM's Uplink Profile
    • Load Balance for Overlay traffic using multi-TEP support
    • Failover for Uplink1 traffic (Primary U1, Standby None)
    • Failover for Uplink2 traffic (Primary U2, Standby None)
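The Edge teaming policies above (load-balanced default teaming for multi-TEP overlay, plus one failover-order named teaming per uplink) can be sketched as a Manager API payload. The teaming names, MTU, and NSX Manager hostname below are assumptions for illustration; the curl call is commented out because it needs a live NSX Manager.

```shell
# Sketch of the Edge VM uplink profile: the default teaming load-balances
# overlay traffic across both fast-path uplinks (multi-TEP), while each
# named teaming pins one uplink with no standby, matching the design above.
cat > edge-uplink-profile.json <<'EOF'
{
  "resource_type": "UplinkHostSwitchProfile",
  "display_name": "Shared-Edge-2pNICs-2.4",
  "mtu": 1600,
  "transport_vlan": 596,
  "teaming": {
    "policy": "LOADBALANCE_SRCID",
    "active_list": [
      { "uplink_name": "uplink-1", "uplink_type": "PNIC" },
      { "uplink_name": "uplink-2", "uplink_type": "PNIC" }
    ]
  },
  "named_teamings": [
    {
      "name": "Uplink1-Teaming",
      "policy": "FAILOVER_ORDER",
      "active_list": [ { "uplink_name": "uplink-1", "uplink_type": "PNIC" } ]
    },
    {
      "name": "Uplink2-Teaming",
      "policy": "FAILOVER_ORDER",
      "active_list": [ { "uplink_name": "uplink-2", "uplink_type": "PNIC" } ]
    }
  ]
}
EOF

# POST to a live NSX Manager (hostname is a placeholder):
# curl -k -u admin -X POST https://nsx-manager.lab.local/api/v1/host-switch-profiles \
#      -H 'Content-Type: application/json' -d @edge-uplink-profile.json

python3 -m json.tool edge-uplink-profile.json > /dev/null && echo "edge payload OK"
```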
The following diagram shows the details described above.



Step 1 - Transport Zones and N-VDS

One (1) N-VDS, "HOST-NVDS", is created for this design. Three (3) Transport Zones are required: "Shared-Overlay-TZ", "Edge-Uplink1-SingleTZ", and "Edge-Uplink2-SingleTZ".
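The three transport zones can be sketched as Manager API payloads. Note that all three reference the same "HOST-NVDS" host switch name, which is what makes the single N-VDS design possible; the NSX Manager hostname is a placeholder and the curl loop is commented out because it needs a live NSX Manager.

```shell
# Sketch: the three transport zones of this design as NSX-T 2.5-era
# Manager API payloads. All share the same host switch name (HOST-NVDS).
cat > tz-overlay.json <<'EOF'
{ "display_name": "Shared-Overlay-TZ", "host_switch_name": "HOST-NVDS", "transport_type": "OVERLAY" }
EOF
cat > tz-uplink1.json <<'EOF'
{ "display_name": "Edge-Uplink1-SingleTZ", "host_switch_name": "HOST-NVDS", "transport_type": "VLAN" }
EOF
cat > tz-uplink2.json <<'EOF'
{ "display_name": "Edge-Uplink2-SingleTZ", "host_switch_name": "HOST-NVDS", "transport_type": "VLAN" }
EOF

# To create them against a live NSX Manager (hostname is a placeholder):
# for f in tz-overlay.json tz-uplink1.json tz-uplink2.json; do
#   curl -k -u admin -X POST https://nsx-manager.lab.local/api/v1/transport-zones \
#        -H 'Content-Type: application/json' -d @"$f"
# done

for f in tz-overlay.json tz-uplink1.json tz-uplink2.json; do
  python3 -m json.tool "$f" > /dev/null
done && echo "transport zone payloads OK"
```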

Step 2 - Uplink Profiles

Two (2) uplink profiles are created:
  • Shared-Compute-2pNICs-2.4-Overlay for the Compute (or Transport Node)
  • Shared-Edge-2pNICs-2.4 for Edge VM's Overlay and Uplink traffic
Take a look at the Transport VLAN ID in the red rectangle: the Compute and Edge VMs use the same Transport VLAN ID, 596.

Step 3 - Segment Creation

Here is the list of required segments for this setup.
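As a hypothetical example of one such segment, the snippet below sketches a VLAN trunk segment (0-4094, matching the Edge VM trunks in this setup) via the Policy API. The segment name, transport zone path, and NSX Manager hostname are assumptions; the curl call is commented out because it needs a live NSX Manager.

```shell
# Hypothetical sketch of one trunk VLAN segment for the Edge VM uplinks.
# The transport zone id in the path is a placeholder (often a UUID).
cat > uplink1-trunk-segment.json <<'EOF'
{
  "display_name": "Edge-Uplink1-Trunk",
  "vlan_ids": [ "0-4094" ],
  "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/Edge-Uplink1-SingleTZ"
}
EOF

# PATCH it into the Policy tree on a live NSX Manager (hostname is a placeholder):
# curl -k -u admin -X PATCH https://nsx-manager.lab.local/policy/api/v1/infra/segments/Edge-Uplink1-Trunk \
#      -H 'Content-Type: application/json' -d @uplink1-trunk-segment.json

python3 -m json.tool uplink1-trunk-segment.json > /dev/null && echo "segment payload OK"
```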

Steps 4 and 5 - Compute and Edge VM N-VDS Deployment

We can now deploy the N-VDS on the Compute hosts and the Edge VMs.


The following picture shows two (2) Edge VMs successfully deployed. Each Edge VM has one (1) N-VDS, as mentioned above.
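Once the Edge VMs are up, a few NSX CLI sanity checks can be run from each Edge node console. This is a non-executable fragment (the commands run on the Edge itself, and the exact output varies by version):

```shell
# Run these from the Edge node CLI (admin user), not from this shell:
#
#   get managers        # confirms connectivity to the NSX management plane
#   get interfaces      # shows fp-eth0/fp-eth1 mapped to the single N-VDS
#   get logical-routers # lists the SR/DR instances once a Tier-0 is attached
```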

You can ping these Edge VM TEPs from a Transport Node with the "vmkping ++netstack=vxlan" command.
Note: Because of the multi-TEP support, each Edge VM has two (2) IP addresses for overlay traffic.
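The check above can be sketched as the fragment below, to be run on an ESXi transport node (so it is shown commented out). The TEP addresses are hypothetical examples from the 596 TEP VLAN; with multi-TEP, each Edge VM answers on two of them.

```shell
# Run on an ESXi transport node (not executable here):
#
#   for tep in 172.16.96.21 172.16.96.22 172.16.96.23 172.16.96.24; do
#     vmkping ++netstack=vxlan -d -s 1572 -c 3 "$tep"
#   done
#
# -d sets "don't fragment" and -s 1572 fills a 1600-byte MTU frame
# (1572 bytes of ICMP payload + 8 ICMP header + 20 IP header = 1600),
# so a successful reply also validates the overlay MTU end to end.
```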

Enjoy your new NSX setup!