NSX Cloud. Part 4 : Deploy PCG
In previous posts, we did all the necessary preparations on the AWS and Azure sides, and completed our routing setup. Now we will deploy Public Cloud Gateways (PCGs) into AWS and Azure. As usual, we will start with AWS.
- Login to CSM and go to Clouds–>AWS–>VPCs. Choose your VPC and it should look like this: both VPCs that we defined are listed, each with a sign that says “No Gateways”. There will be a gateway soon 🙂
- One small step we need to do first is to generate a key pair in the AWS EC2 section. The key pair will be used during PCG installation and for management access to the PCG in the future. Navigate to EC2 and click Key Pairs, then click Create Key Pair. This downloads a .pem file to your desktop. Save it, as it will be needed later.
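If you prefer the CLI over the console, an equivalent approach is to generate the key locally and import the public half into EC2. This is just a sketch: the key name nsx-pcg-key is my own example, and the import step assumes a configured AWS CLI, so it is left commented out.

```shell
# Generate an RSA key pair locally; -N "" sets an empty passphrase,
# -q suppresses the interactive output.
ssh-keygen -t rsa -b 2048 -N "" -f ./nsx-pcg-key -q
chmod 400 ./nsx-pcg-key

# Import the public half into EC2 (requires AWS CLI credentials;
# the key name "nsx-pcg-key" is an example, not from the CSM workflow):
# aws ec2 import-key-pair --key-name nsx-pcg-key \
#     --public-key-material fileb://./nsx-pcg-key.pub
```

Either way, you end up with a named key pair in EC2 and the private key safely on your machine.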
- Under Actions Click Deploy NSX Cloud Gateway
- Choose the .pem file that we created earlier. Under Advanced, select “Override Public Cloud Provider’s DNS Server” and specify your internal DNS server. This is needed to make sure the PCG can resolve the FQDN of your NSX-T Manager. Click Next.
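Before deploying, it is worth confirming that the DNS server you enter here can actually resolve the Manager's FQDN. A quick sketch (the FQDN is a placeholder that defaults to localhost so the check runs anywhere; substitute your real NSX-T Manager name):

```shell
# Resolve the NSX-T Manager FQDN; override NSX_FQDN with your real name,
# e.g. NSX_FQDN=nsxmanager.corp.local (a placeholder, not from this lab).
NSX_FQDN="${NSX_FQDN:-localhost}"
if getent hosts "$NSX_FQDN" >/dev/null 2>&1; then
    echo "resolved: $NSX_FQDN"
else
    echo "FAILED to resolve: $NSX_FQDN" >&2
    exit 1
fi
```

If this fails against your internal DNS server, the PCG deployment will have the same problem.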
- In production, it is highly recommended to check the HA option. In our environment, we will deploy a single PCG. Specify the AZ and the subnets that were created earlier, select Allocate New IP for both the MGMT and uplink interfaces, and click Deploy.
- It takes around 10-15 minutes for the deployment to finish. Check the status and confirm that our Transit VPC shows “Deployed Gateway” with a green checkmark. Note that for now it says “Self-Managed”; I will explain that next.
- You could now install a PCG in the Compute VPC as well, but that is not scalable. In the new 2.4 release we can ‘share’ our PCG with other VPCs, creating a transit-compute relationship. Up to 10 Compute VPCs can utilize a single PCG in the current release. Under “Actions” for the Compute VPC, click “Link to Transit VPC”.
- Select the VPC that will be used as transit, i.e. the one where the PCG is already deployed, and click Next. Note that routing must be in place between your VPCs, which we established in the previous part.
- Check back and we can now see Transit-Compute Relationship between our VPCs
Now you get the idea: instead of installing a PCG in every VPC, you install it only in the Transit VPC and then link all other VPCs to it so they can utilize it, creating a transit-compute relationship.
Let’s explore what objects have been created during the PCG deployment, starting with the AWS side.
- Login to the AWS Console and navigate to the EC2 page, then go to Security Groups. A total of 13 security groups have been created:
3 for the PCG (in blue): these are applied to each PCG interface accordingly (mgmt, uplink, downlink);
5 for VPC-1 (in red)
5 for VPC-2 (in green)
vm-underlay-sg and vm-override-sg are the important ones to mention, as they will be utilized by our future workloads. The remaining security groups are not in use as of the current 2.4 release of NSX-T.
- Go to EC2 Running Instances and you will see the PCG instance running with its allocated interfaces and Elastic IP addresses. There are 3 primary IP addresses, one for each interface (mgmt, uplink, downlink), plus additional secondary IP addresses that will be utilized for SNAT and as the local endpoint for VPN (to be discussed later).
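The same inventory can be pulled from the AWS CLI instead of the console. A hedged sketch, guarded so it degrades gracefully on a machine where the CLI has no credentials:

```shell
# List security-group names and the PCG instance's private IPs via the
# AWS CLI. Skips cleanly when no credentials are configured.
if command -v aws >/dev/null && aws sts get-caller-identity >/dev/null 2>&1; then
    aws ec2 describe-security-groups \
        --query 'SecurityGroups[].GroupName' --output text
    aws ec2 describe-instances \
        --query 'Reservations[].Instances[].NetworkInterfaces[].PrivateIpAddresses[].PrivateIpAddress' \
        --output text
else
    echo "AWS CLI not configured; skipping"
fi
```

Run against the lab account, the first command should show the 13 groups described above, and the second the PCG's primary and secondary addresses.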
- Navigate to Route 53 and you will see that one zone called vmware.local has been created, with an A record for NSX-GW pointing to the management IP address of the PCG. This is needed to resolve the hostname of the PCG when installing NSX Tools (the NSX agent) into workloads during the onboarding process, which we will discuss in the next part.
Now let’s review objects that have been created on NSX-T Manager side
- Login to NSX-T Manager and go to System–>Fabric–>Transport Zones
Two TZs have been created: one for VLAN traffic, one for Overlay
- Go to Nodes–>Edge Transport Nodes. Our PCG itself has been registered here as a transport node.
- Go to Networking–>Tier 0 Gateways. This is the Tier-0 router that gets created when you deploy a PCG.
- Go to Tier 1 Gateways. Each VPC will have its own Tier-1 Gateway. In our case there are two VPCs, so two T1s are created and attached to the Tier-0 automatically.
- Navigate to Inventory–>Domains. One domain gets created to group your future firewall rules
- Navigate to Groups. Several groups get created to group workloads by their VPC and to provide access to native cloud metadata services and the default route.
- There is a concept called Forwarding Policies which applies only to the NSX Cloud setup. This is essentially policy-based routing, where you can influence routing behavior depending on the source/destination of the traffic. Navigate to Networking–>Forwarding Policies. You will see the default policies that were created during setup. The first one routes all traffic from All Cloud Segments (matched using the group we saw earlier) towards cloud metadata services using the underlay, i.e. native cloud routing.
The second and third policies route all local segment traffic destined to the respective local CIDR block using the underlay.
The last policy says that anything else should be routed using the overlay, i.e. through NSX-T.
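The evaluation logic of these four policies can be pictured with a toy lookup. This is purely my own illustration, not NSX code, and the 10.1/10.2 CIDRs are example values standing in for the two VPCs' local blocks:

```shell
# Toy model of forwarding-policy evaluation: metadata and local-CIDR
# destinations go to the underlay, everything else to the overlay.
route_for() {
    case "$1" in
        169.254.169.254) echo "underlay (metadata)" ;;     # policy 1
        10.1.*|10.2.*)   echo "underlay (local CIDR)" ;;   # policies 2-3
        *)               echo "overlay" ;;                 # catch-all
    esac
}

route_for 169.254.169.254   # underlay (metadata)
route_for 10.1.0.5          # underlay (local CIDR)
route_for 8.8.8.8           # overlay
```

The real policies match on NSX groups and CIDR blocks rather than string globs, but the top-down "first match wins, catch-all goes to overlay" behavior is the same.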
- Navigate to Security–>Distributed Firewall.
A couple of rules get created during the PCG setup. The first one on top allows DHCP offers within all VPCs; this is a stateless section.
In the stateful section, E-W traffic is allowed within each local VPC segment, while anything else destined towards the VPC local segments will be dropped.
- Go to Gateway Firewall. This is your N-S traffic.
Here the default outbound rule allows all traffic going OUT (plus return traffic that is part of the same session), while the default inbound rule drops all traffic coming IN.
This concludes the deployment and verification of the PCG on the AWS side. We will now move on to Azure.
- Login to CSM and go to Clouds–>Azure–>VNets. Both VNets that we defined are listed with a sign that says “No Gateways”.
- One small step that is required is to generate a private/public key pair. This will be used to access the PCG over SSH in the future. The simplest way is to use ssh-keygen on a Mac:
```
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/nizami1/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/nizami1/.ssh/id_rsa.
Your public key has been saved in /Users/nizami1/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:jqNLrKm2xrwFACu4zf8p+Wc/rKM+m76qfPKM/BJ0FSg nizami1@Nizamis-MacBook-Pro.local
The key's randomart image is:
+---[RSA 2048]----+
|. ...            |
|o.E . .          |
|= . .            |
|o+. .            |
|.oo. S           |
| oo o            |
|o o+.o ..        |
| B+*+o.++ o      |
|++XBBB@B.+..     |
+----[SHA256]-----+
```
You will need the public key (id_rsa.pub) to input during the PCG installation.
- For VNET1 under Actions Click Deploy NSX Cloud Gateway
- Paste your public key into the SSH Public Key area
- Under the Advanced setting, specify your internal DNS server. This is needed for NSX-T Manager FQDN resolution. Click Next.
- In production, it is highly recommended to check the HA option. In our environment, we will deploy a single PCG. Specify the subnets that were created earlier, select Allocate New IP for both the MGMT and uplink interfaces, and click Deploy. Note: NTP sync is crucial, as the deployment might fail if your CSM is not synced with NTP. Please be sure to confirm that the time and date are correct before proceeding with this step.
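A quick clock sanity check before clicking Deploy can save a failed run. Which sync tool is present varies by platform, so everything beyond plain date is left as hedged comments:

```shell
# Print current UTC time to eyeball against a known-good reference clock.
date -u
# If the CSM host runs chrony or systemd-timesyncd, these show sync state:
# chronyc tracking
# timedatectl status
```

If the printed time is off by more than a few seconds from a trusted source, fix NTP on the CSM before deploying.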
The process first copies the PCG image into the storage account (specified earlier); after the copy finishes, the PCG is deployed from that image.
- Verify that the Gateway status shows Deployed, similar to the image below.
- We will follow the same design and link our other VNet to the Transit VNet where we have deployed the PCG. For VNET2, click Actions–>Link to Transit VNET.
- Indicate VNET1 as Transit VNET and click Next
- Verify that Transit-Compute relationship is in place
The objects created on the NSX-T Manager side will be the same as in the AWS case (with different names, obviously). Let’s verify what has been created on the Azure side.
- Login to Azure console and go to Network Security Groups
3 groups are created for the PCG and attached to each interface respectively (mgmt, uplink, downlink)
Of the other groups, the following will be utilized: vm-override-sg, vm-underlay-sg, and vm-quarantine-sg (to be discussed later).
vm-overlay-sg is not in use at the moment.
- Go to Virtual Machines, find NSX-GW1, and click on Networking. It has three interfaces: mgmt, vtep (downlink), and uplink. Alternatively, this can be checked by going to the “Network Interface” resource from the top search.
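The same NIC view is available from the Azure CLI. A sketch only: the resource group name nsx-rg is a placeholder for wherever CSM deployed the gateway, and the block skips itself when az is not logged in:

```shell
# List NICs and their primary private IPs in the PCG's resource group.
# Requires 'az login'; otherwise it just reports that it is skipping.
if command -v az >/dev/null && az account show >/dev/null 2>&1; then
    az network nic list --resource-group nsx-rg \
        --query '[].{name:name, ip:ipConfigurations[0].privateIpAddress}' \
        --output table
else
    echo "Azure CLI not logged in; skipping"
fi
```

Against the lab subscription, you should see the three PCG NICs (mgmt, downlink, uplink) with their private addresses.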
This concludes this part on deploying PCGs in both AWS and Azure. In the next part we will start to onboard actual workloads. Stay tuned…
Author- Nizami Mammadov CCIE#22247, VCAP-NV