
NSX-T Edge VM Design - Decision Flow

NSX-T Edge VM Design is not an easy topic, simply because every customer has its own technical requirements, and environments differ in the number of physical servers, the number of VMs, and the storage solution in place. Note: Edge design on bare-metal servers is not addressed in this article.

Questions to ask
Before starting the NSX-T Edge VM design, there are some questions you need to answer:
- How many physical servers or hosts do I have in my environment?
- How many physical network interface cards (pNICs) are available on each host?
- Do I have IP storage or vSAN?
- Which NSX-T release do I run today?

Why are these questions important before deploying NSX-T Edge VMs? Because, as mentioned, we have different options for configuring the hosts where Edge VMs will be deployed. These options depend on the following criteria: Data Center size - VMware recommends dedicating a cluster to Edge VMs, but there is an associated cost (CAPEX and OPEX), and some custo
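As a rough illustration of the decision flow above, here is a minimal sketch that maps the answers to these questions to the designs covered in this series. The function name and return strings are my own, not VMware terminology; the version rule comes from the designs below (single N-VDS designs are available since NSX-T 2.5, multiple N-VDS designs are required for 2.4 and below).

```python
def edge_vm_design(pnics: int, nsx_version: tuple) -> str:
    """Rough mapping from host pNIC count and NSX-T release to an
    Edge VM design option, based on the designs in this series."""
    single_nvds_supported = nsx_version >= (2, 5)
    if pnics >= 4:
        if single_nvds_supported:
            return "Single N-VDS Edge VM on N-VDS or VDS with 4 pNICs"
        return "Multiple N-VDS Edge VM on VDS with 4 pNICs"
    # Hosts with only 2 pNICs
    if single_nvds_supported:
        return "Single N-VDS Edge VM on N-VDS with 2 pNICs"
    return "Multiple N-VDS Edge VM on N-VDS with 2 pNICs"

print(edge_vm_design(4, (2, 5)))  # single N-VDS designs apply
print(edge_vm_design(2, (2, 4)))  # multiple N-VDS design required
```

This is only a decision aid; the remaining criteria (shared vs. dedicated cluster, storage) still shape the final design.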

NSX-T Edge VM Design - Single N-VDS Edge VM on N-VDS with 4 pNICs

In a small Data Center, it's common to find clusters with management components such as vCenter, NSX Manager, vRealize Automation, vRealize Network Insight, vRealize Log Insight, and so on. This is what we call a "Shared" Management and Compute cluster. Edge VMs can be deployed in a shared cluster. In the following diagram, the hosts are ESXi-based and have four (4) or more pNICs each. In this scenario, we are going to dedicate two (2) pNICs to management components and two (2) other pNICs to workload traffic (or compute) and Edge VMs. I use the following VLAN information in my setup for the Edge VM configuration:
- Management VLAN: 599
- vMotion VLAN: 598
- TEP VLAN for Compute: 596
- TEP VLAN for Edge VMs: 595
- Uplink1 Trunk for Edge VM: 0-4094
- Uplink2 Trunk for Edge VM: 0-4094

N-VDS Edge VM Diagram
We are going to configure the "Single N-VDS Edge VM Design on N-VDS with four (4) pNICs". This design is available since NSX-T
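Note that in this design the Edge TEP VLAN (595) is kept separate from the compute TEP VLAN (596), since both ride the same N-VDS, and the Edge uplink trunks must carry the Edge TEP VLAN. A small sketch to sanity-check the VLAN plan above (the helper is my own, not an NSX API; the separation rule is stated as an assumption of this design):

```python
# VLAN plan from the setup above (values from this article)
vlan_plan = {
    "management": 599,
    "vmotion": 598,
    "tep_compute": 596,
    "tep_edge": 595,
    "edge_uplink1_trunk": range(0, 4095),  # trunk 0-4094
    "edge_uplink2_trunk": range(0, 4095),  # trunk 0-4094
}

def check_plan(plan: dict) -> list:
    """Return a list of problems found in the VLAN plan.
    Assumption: with Edge VMs behind the same N-VDS as the host TEP,
    the Edge TEP VLAN differs from the compute TEP VLAN, and the
    Edge uplink trunks must include the Edge TEP VLAN."""
    problems = []
    if plan["tep_edge"] == plan["tep_compute"]:
        problems.append("Edge TEP VLAN must differ from compute TEP VLAN")
    for trunk in ("edge_uplink1_trunk", "edge_uplink2_trunk"):
        if plan["tep_edge"] not in plan[trunk]:
            problems.append(f"{trunk} does not carry the Edge TEP VLAN")
    return problems

print(check_plan(vlan_plan))  # [] -> plan is consistent
```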

NSX-T Edge VM Design - Single N-VDS Edge VM on VDS with 4 pNICs

In a small Data Center, it's common to find clusters with management components such as vCenter, NSX Manager, vRealize Automation, vRealize Network Insight, vRealize Log Insight, and so on. This is what we call a "Shared" Management and Compute cluster. Edge VMs can be deployed in a shared cluster. In the following diagram, the hosts are ESXi-based and have four (4) or more pNICs each. In this scenario, we are going to dedicate two (2) pNICs to management components and Edge VMs. Two (2) other pNICs will be used for workload traffic (or compute). I use the following VLAN information in my setup for the Edge VM configuration:
- Management VLAN: 599
- vMotion VLAN: 598
- TEP VLAN for Compute: 596
- Uplink1 Trunk for Edge VM: 0-4094
- Uplink2 Trunk for Edge VM: 0-4094

N-VDS Edge VM Diagram
We are going to configure the "Single N-VDS Edge VM Design on VDS with four (4) pNICs". This design is available since NSX-T 2.5 release. The si
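To make the pNIC split concrete, here is a minimal sketch of the host layout in this design: two pNICs on the management VDS carrying the management, vMotion, and Edge VM trunk port groups, and two pNICs on the compute N-VDS for workload overlay traffic. The switch and port-group names are hypothetical; the VLAN IDs are from this article.

```python
# pNIC layout for the 4-pNIC "single N-VDS Edge VM on VDS" design.
host_layout = {
    "vds-mgmt": {
        "pnics": ["vmnic0", "vmnic1"],
        "portgroups": {
            "pg-management": 599,
            "pg-vmotion": 598,
            "pg-edge-trunk-1": "0-4094",  # Edge VM uplink1
            "pg-edge-trunk-2": "0-4094",  # Edge VM uplink2
        },
    },
    "nvds-compute": {
        "pnics": ["vmnic2", "vmnic3"],
        "transport_vlan": 596,  # host TEP VLAN for workloads
    },
}

def pnics_used(layout: dict) -> int:
    """Count the physical NICs consumed by the design."""
    return sum(len(sw.get("pnics", [])) for sw in layout.values())

print(pnics_used(host_layout))  # 4
```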

NSX-T Edge VM Design - Multiple N-VDS Edge VM on VDS with 4 pNICs

In a small Data Center, it's common to find clusters with management components such as vCenter, NSX Manager, vRealize Automation, vRealize Network Insight, vRealize Log Insight, and so on. This is what we call a "Shared" Management and Compute cluster. Edge VMs can be deployed in a shared cluster. In the following diagram, the hosts are ESXi-based and have four (4) or more pNICs each. In this scenario, we are going to dedicate two (2) pNICs to management components and Edge VMs. Two (2) other pNICs will be used for workload traffic (or compute). I use the following VLAN information in my setup for the Edge VM configuration:
- Management VLAN: 599
- vMotion VLAN: 598
- TEP VLAN for Compute and Edge VMs: 596
- Uplink1 VLAN for Edge VM: 10
- Uplink2 VLAN for Edge VM: 20

N-VDS Edge VM Diagram
We are going to configure the "Multiple N-VDS Edge VM Design on VDS with four (4) pNICs". This design is required for NSX-T 2.4 release and below.
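In the multiple N-VDS design, the Edge VM runs one internal N-VDS per traffic type, each mapped to its own port group, which is why dedicated uplink VLANs (10 and 20) appear here instead of trunks. The sketch below models that interface layout; the N-VDS and port-group names are hypothetical, the VLAN IDs are from this article.

```python
# Interface layout of an Edge VM in the multiple N-VDS design
# (NSX-T 2.4 and below): one N-VDS per traffic type.
edge_vm_interfaces = {
    "eth0":    {"nvds": None,           "portgroup": "pg-management", "vlan": 599},
    "fp-eth0": {"nvds": "nvds-overlay", "portgroup": "pg-edge-tep",   "vlan": 596},
    "fp-eth1": {"nvds": "nvds-uplink1", "portgroup": "pg-uplink1",    "vlan": 10},
    "fp-eth2": {"nvds": "nvds-uplink2", "portgroup": "pg-uplink2",    "vlan": 20},
}

# Each N-VDS inside the Edge VM owns exactly one data-path interface,
# hence the dedicated port group (and VLAN) per uplink.
nvds_count = len({v["nvds"] for v in edge_vm_interfaces.values() if v["nvds"]})
print(nvds_count)  # 3 N-VDS inside the Edge VM
```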

NSX-T Edge VM Design - Single N-VDS Edge VM on N-VDS with 2 pNICs

In a small Data Center, it's common to find clusters with management components such as vCenter, NSX Manager, vRealize Automation, vRealize Network Insight, vRealize Log Insight, and so on. This is what we call a "Shared" Management and Compute cluster. Edge VMs can be deployed in a shared cluster. In the following diagram, the hosts are ESXi-based and have only two (2) pNICs each. I use the following VLAN information in my setup for the Edge VM configuration:
- Management VLAN: 599
- vMotion VLAN: 598
- TEP VLAN for Compute: 596
- TEP VLAN for Edge VM: 595
- Uplink1 Trunk for Edge VM: 0-4094
- Uplink2 Trunk for Edge VM: 0-4094

N-VDS Edge VM Diagram
We are going to configure the "Single N-VDS Edge VM Design on N-VDS with two (2) pNICs". This design is available since NSX-T 2.5 release. The single N-VDS provides multi-TEP capabilities.

Compute Configuration
- The host has only two (2) pNICs available
- An N-VDS for the host is already de
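The multi-TEP capability mentioned above means the Edge node can own one TEP per uplink and spread overlay traffic across both pNICs. The sketch below only illustrates the idea of pinning each source to one TEP; the IP addresses and the hashing scheme are illustrative, not NSX internals.

```python
import hashlib

# Multi-TEP sketch: one TEP IP per Edge uplink on the single N-VDS.
# IPs are hypothetical examples on the Edge TEP VLAN (595).
teps = {"uplink-1": "10.0.95.11", "uplink-2": "10.0.95.12"}

def tep_for_source(source_id: str) -> str:
    """Deterministically pin a traffic source to one TEP
    (source-ID style balancing, illustrative only)."""
    uplinks = sorted(teps)
    idx = int(hashlib.md5(source_id.encode()).hexdigest(), 16) % len(uplinks)
    return teps[uplinks[idx]]

print(tep_for_source("vm-web-01"))  # one of the two TEP IPs
```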

NSX-T Edge VM Design - Multiple N-VDS Edge VM on N-VDS with 2 pNICs

In a small Data Center, it's common to find clusters with management components such as vCenter, NSX Manager, vRealize Automation, vRealize Network Insight, vRealize Log Insight, and so on. This is what we call a "Shared" Management and Compute cluster. Edge VMs can be deployed in a shared cluster. In the following diagram, the hosts are ESXi-based and have only two (2) pNICs each. I use the following VLAN information in my setup for the Edge VM configuration:
- Management VLAN: 599
- vMotion VLAN: 598
- TEP VLAN for Compute: 596
- TEP VLAN for Edge VM: 595
- Uplink1 VLAN for Edge VM: 10
- Uplink2 VLAN for Edge VM: 20

N-VDS Edge VM Diagram
We are going to configure the "Multiple N-VDS Edge VM Design on N-VDS with two (2) pNICs". This design is required for NSX-T 2.4 release and below.

Compute Configuration
- The host has only two (2) pNICs available
- An N-VDS for the host is already deployed: "HOST-NVDS"
- Both pNICs are used for
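With only two pNICs, everything rides on the host's single N-VDS ("HOST-NVDS"), so each Edge VM vNIC attaches to a VLAN-backed segment on that N-VDS. A quick sketch to check that every Edge VLAN in the plan above has a backing segment; the segment names are hypothetical, the VLAN IDs are from this article.

```python
# VLAN-backed segments on the host's single N-VDS ("HOST-NVDS")
# that the Edge VM vNICs attach to in this 2-pNIC design.
host_nvds_segments = {
    "seg-management": 599,
    "seg-edge-tep": 595,
    "seg-uplink1": 10,
    "seg-uplink2": 20,
}

# Every VLAN the Edge VM needs must be served by a segment.
edge_required_vlans = {599, 595, 10, 20}
missing = edge_required_vlans - set(host_nvds_segments.values())
print(missing)  # set() -> all Edge VLANs have a backing segment
```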

Identity-based firewall for RDSH with VMware NSX-T 2.4

After a long break in blogging, I am back to share my experience with VMware NSX Data Center. Today, we talk about Context-Aware Micro-Segmentation by implementing the NSX Identity-based Firewall (IDFW) feature. IDFW allows an administrator to create Active Directory user-based Distributed Firewall (DFW) rules. It can be used for Virtual Desktops (VDI) or Remote Desktop Session Hosts (RDSH). This post describes how to configure NSX Data Center with IDFW in order to allow user access to a specific application via Remote Desktop Session Host (RDSH) systems.

Topology
First, let's take a look at the lab topology. We have two (2) users: Bob from the HR team and Charlie from the DevOps team. Both use the same Remote Desktop host to reach their respective applications, IceHRM and Planespotter, via HTTP and HTTPS. On the network side, we have an NSX Tier-0 Gateway connected to:
- An NSX Tier-1 Gateway to access the following applications:
  - HR application (172.16.10
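The effect of the IDFW rules in this lab can be modeled as follows: each user's AD group is allowed to reach its own application over HTTP/HTTPS, and everything else is denied. The group and application names follow the article; the rule structure is my own sketch, not the NSX API.

```python
# Illustrative model of AD-group-based DFW rules (IDFW).
rules = [
    {"source_group": "HR",     "app": "IceHRM",       "ports": {80, 443}},
    {"source_group": "DevOps", "app": "Planespotter", "ports": {80, 443}},
]
users = {"bob": "HR", "charlie": "DevOps"}

def allowed(user: str, app: str, port: int) -> bool:
    """Default-deny: permit only if a rule matches the user's
    AD group, the destination application, and the service port."""
    group = users.get(user)
    return any(r["source_group"] == group and r["app"] == app
               and port in r["ports"] for r in rules)

print(allowed("bob", "IceHRM", 443))        # True
print(allowed("bob", "Planespotter", 443))  # False: wrong AD group
```

Even though both users log on to the same RDSH system, the rules match on the logged-in user's identity, not on the shared host's IP.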