NSX-T Edge VM Design - Multiple N-VDS Edge VM on VDS with 4 pNICs

In a small data center, it is common to find clusters that host management components such as vCenter, NSX Manager, vRealize Automation, vRealize Network Insight, vRealize Log Insight, and so on. This is what we call a "shared" Management and Compute cluster. Edge VMs can also be deployed in such a shared cluster.

In the following diagram, the hosts are ESXi based and each has four (4) or more pNICs available.
In this scenario, we are going to dedicate two (2) pNICs to the management components and the Edge VMs. The two (2) other pNICs will be used for workload (or compute) traffic.

I use the following VLAN information in my setup for the Edge VM configuration.
  • Management VLAN: 599
  • vMotion VLAN: 598
  • TEP VLAN for Compute and Edge VMs: 596
  • Uplink1 VLAN for Edge VM: 10
  • Uplink2 VLAN for Edge VM: 20
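As a quick sanity check, the VLAN plan above can be captured in a small Python dictionary and validated (a sketch; the key names are mine, not NSX-T terminology):

```python
# The VLAN plan from the list above. Note that Compute and Edge VMs
# share the same TEP VLAN (596) in this design.
VLANS = {
    "management": 599,
    "vmotion": 598,
    "tep_compute_and_edge": 596,
    "edge_uplink1": 10,
    "edge_uplink2": 20,
}

def validate_vlans(vlans):
    """Check that every VLAN ID is in the valid 802.1Q range and unique."""
    ids = list(vlans.values())
    assert all(1 <= v <= 4094 for v in ids), "VLAN ID out of 802.1Q range"
    assert len(ids) == len(set(ids)), "duplicate VLAN ID"
    return True
```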

N-VDS Edge VM Diagram

We are going to configure the "Multiple N-VDS Edge VM Design on VDS with four (4) pNICs".
This design is required for NSX-T releases 2.4 and earlier.

Compute Configuration

  • The host has four (4) pNICs available
  • A VDS named "Shared-VDS" is deployed on the host for management and Edge VM traffic
  • An N-VDS named "HOST-NVDS" is deployed for compute traffic
  • Two (2) pNICs provide redundancy and load balancing on "Shared-VDS"
  • Two (2) pNICs provide redundancy and load balancing on "HOST-NVDS"
  • A TEP IP pool and VLAN are defined for Compute and Edge VMs
  • The compute Uplink Profile defines the teaming policy (this is an example; you can adjust it):
    • Load Balance Source for overlay traffic

Edge VM Configuration

  • An Edge VM has four (4) interfaces available
    • eth0 is dedicated to management traffic
    • fp-eth0 is used for overlay traffic
    • fp-eth1 is used for uplink1 traffic
    • fp-eth2 is used for uplink2 traffic
  • Three (3) N-VDS will be defined for the Edge VM
    • A dedicated N-VDS for the Edge VM Overlay traffic "HOST-NVDS"
    • A dedicated N-VDS for the Edge VM uplink1 "Edge-Uplink1-NVDS"
    • A dedicated N-VDS for the Edge VM uplink2 "Edge-Uplink2-NVDS"
  • Because each traffic type uses a dedicated Edge VM interface, the same teaming policy type applies to all traffic:
    • Failover Order for TEP traffic (Active: U1, Standby: none)
    • Failover Order for Uplink1 traffic (Active: U1, Standby: none)
    • Failover Order for Uplink2 traffic (Active: U2, Standby: none)
The following diagram shows the details described above.
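The three failover teaming policies above can be captured as simple data structures to see the pattern at a glance (a sketch; the structure mirrors, but is not, the NSX-T uplink-profile "teaming" object):

```python
# Failover-order teaming for each Edge N-VDS: one active uplink, no standby.
# Keys correspond to the Edge VM traffic types listed above.
EDGE_TEAMING = {
    "overlay": {"policy": "FAILOVER_ORDER", "active": ["uplink-1"], "standby": []},
    "uplink1": {"policy": "FAILOVER_ORDER", "active": ["uplink-1"], "standby": []},
    "uplink2": {"policy": "FAILOVER_ORDER", "active": ["uplink-2"], "standby": []},
}
```

Only the uplink2 N-VDS pins traffic to the second uplink; overlay and uplink1 both ride on the first.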



Step 1 - Transport Zones and N-VDS

Three (3) N-VDS are created for this design: "HOST-NVDS" (overlay, shared by the Compute and Edge VMs), "Edge-Uplink1-NVDS", and "Edge-Uplink2-NVDS".
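Each N-VDS is tied to a transport zone. As an illustration, the corresponding transport zones could be created through the NSX-T Manager API (POST /api/v1/transport-zones in the 2.x releases); the display names below are mine, not mandated by NSX-T:

```python
def transport_zone(display_name, host_switch_name, transport_type):
    """Build a TransportZone payload for POST /api/v1/transport-zones
    (NSX-T 2.x Manager API). The host_switch_name binds the transport
    zone to one of the three N-VDS of this design."""
    return {
        "display_name": display_name,
        "host_switch_name": host_switch_name,
        "transport_type": transport_type,  # "OVERLAY" or "VLAN"
    }

TRANSPORT_ZONES = [
    transport_zone("TZ-Overlay", "HOST-NVDS", "OVERLAY"),
    transport_zone("TZ-Edge-Uplink1", "Edge-Uplink1-NVDS", "VLAN"),
    transport_zone("TZ-Edge-Uplink2", "Edge-Uplink2-NVDS", "VLAN"),
]
```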

Step 2 - Uplink Profiles

Four (4) uplink profiles are created:
  • Shared-Compute-2pNICs-2.4-Overlay for the Compute (or Transport Node)
  • Shared-Edge-2pNICs-2.4-Overlay for Edge VM Overlay interface
  • Shared-Edge-2pNICs-2.4-Uplink1 and Shared-Edge-2pNICs-2.4-Uplink2 for both Edge VM uplink interfaces
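The four profiles above could be built as UplinkHostSwitchProfile payloads for POST /api/v1/host-switch-profiles (NSX-T 2.x Manager API). This is a sketch under two assumptions: the compute profile uses source-port load balancing with transport VLAN 596, and the Edge profiles use transport VLAN 0 because the Edge VM vNICs attach to VDS port groups that already tag the traffic:

```python
def uplink_profile(name, policy, active, standby, transport_vlan):
    """Build an UplinkHostSwitchProfile payload for
    POST /api/v1/host-switch-profiles (NSX-T 2.x Manager API)."""
    return {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": name,
        "teaming": {
            "policy": policy,  # e.g. "LOADBALANCE_SRCID" or "FAILOVER_ORDER"
            "active_list": [{"uplink_name": u, "uplink_type": "PNIC"} for u in active],
            "standby_list": [{"uplink_name": u, "uplink_type": "PNIC"} for u in standby],
        },
        "transport_vlan": transport_vlan,
    }

PROFILES = [
    # Compute hosts: two pNICs, source-port load balancing, TEP VLAN 596.
    uplink_profile("Shared-Compute-2pNICs-2.4-Overlay", "LOADBALANCE_SRCID",
                   ["uplink-1", "uplink-2"], [], 596),
    # Edge VM profiles: failover order, one active uplink, no standby.
    # Transport VLAN 0 is an assumption (VLAN tagged at the VDS port group).
    uplink_profile("Shared-Edge-2pNICs-2.4-Overlay", "FAILOVER_ORDER",
                   ["uplink-1"], [], 0),
    uplink_profile("Shared-Edge-2pNICs-2.4-Uplink1", "FAILOVER_ORDER",
                   ["uplink-1"], [], 0),
    uplink_profile("Shared-Edge-2pNICs-2.4-Uplink2", "FAILOVER_ORDER",
                   ["uplink-2"], [], 0),
]
```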

Step 3 - Segment Creation

Here is the list of required segments for this setup.
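As an illustration, the two VLAN-backed segments for the Edge VM uplinks (VLANs 10 and 20 from the setup above) could be created as logical switches via POST /api/v1/logical-switches in the NSX-T 2.x Manager API; the display names and transport zone IDs below are placeholders, not real values:

```python
def vlan_segment(display_name, transport_zone_id, vlan):
    """Build a VLAN-backed logical switch payload for
    POST /api/v1/logical-switches (NSX-T 2.x Manager API).
    transport_zone_id is the UUID returned when the transport zone
    was created in Step 1."""
    return {
        "display_name": display_name,
        "transport_zone_id": transport_zone_id,
        "vlan": vlan,
        "admin_state": "UP",
    }

# Placeholder transport zone IDs; use the real UUIDs from Step 1.
SEGMENTS = [
    vlan_segment("Edge-Uplink1-VLAN10", "tz-edge-uplink1-id", 10),
    vlan_segment("Edge-Uplink2-VLAN20", "tz-edge-uplink2-id", 20),
]
```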

Steps 4 and 5 - Compute and Edge VM N-VDS Deployment

We can now configure the three Edge VM N-VDS and the compute N-VDS.


The following picture shows two (2) Edge VMs deployed successfully. Each Edge VM has the three (3) N-VDS mentioned above.

You can ping the Edge VM TEP addresses from a Transport Node with the "vmkping ++netstack=vxlan <Edge-TEP-IP>" command.

Enjoy your new NSX setup!