A hub-spoke network architecture consists of a central virtual network, called the "hub" network, and a series of remote networks connected to the hub, called "spoke" networks.
A VPN or ExpressRoute gateway is configured in the hub, through which traffic passes to and from the remote networks, which can in turn be other virtual networks or on-premises networks.
The hub-spoke infrastructure is widely used in hybrid architectures because it offers several advantages:
Allows you to centralize services that must be shared by the workloads in the various networks connected to the hub
Allows you to logically isolate resources in different networks while maintaining connectivity with the hub and with each other
Allows traffic between Azure and the on-premises network to transit through the hub gateway, instead of having to configure a gateway for each spoke virtual network
Through virtual network peering with the hub network, the spoke networks "inherit" the gateway routes to the remote networks (virtual or on-premises) that are connected directly via the gateway, without having to create specific routes on each spoke.
To secure the cloud network infrastructure and gain greater control over traffic, you can deploy in the hub network either a third-party NVA, purchased directly from the Azure Marketplace (there are products from several vendors), or Azure Firewall, a fully cloud-native, highly available and scalable service with advanced threat protection and traffic control.
In this post we will see how to integrate Azure Firewall into a hub-spoke hybrid network, what adjustments you need to make to properly route traffic through the firewall and how to troubleshoot when you encounter connectivity problems.
Current architecture
We have already set up a hub-spoke hybrid network with the following structure:
a hub virtual network:
az-ds-hub-vnet - IP: 10.7.0.0/16
Subnets:
GatewaySubnet - IP: 10.7.200.0/26, dedicated to the VPN gateway
AzureFirewallSubnet - IP: 10.7.1.0/24, dedicated to Azure Firewall
Infra - IP: 10.7.0.0/24 with a VM:
"hubvm01" - IP: 10.7.0.4
two spoke virtual networks:
az-ds-vnet-spoke01 - IP: 10.10.0.0/16 with a VM:
"Spoke01" - IP: 10.10.0.4
az-ds-vnet-spoke02 - IP: 10.20.0.0/16 with two VMs:
"Spoke02" - IP: 10.20.0.4
"SQLVM02" - IP: 10.20.0.5
an on-premises network with IP range 192.168.20.0/24 connected to Azure
a laptop device with address 192.168.20.40
a VPN gateway connected to the local network with a site-to-site connection
Peering between spoke networks and the hub with gateway transit enabled.
two Route Tables associated with the spoke networks to route traffic to the other spoke network via the gateway:
The important thing here is to set the "Propagate gateway routes" option to Yes, so that the routes to the on-premises network are propagated to the spokes through the network peering.
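For those who prefer scripting, here is a minimal Azure CLI sketch of this part of the setup; the resource group az-ds-rg, the peering and route table names, and the spoke subnet name default are placeholders chosen for illustration, not taken from the environment above.

```bash
RG=az-ds-rg   # hypothetical resource group name

# Peering hub <-> spoke01 with gateway transit, so the spoke can use the hub's VPN gateway
az network vnet peering create -g $RG --name hub-to-spoke01 \
  --vnet-name az-ds-hub-vnet --remote-vnet az-ds-vnet-spoke01 \
  --allow-vnet-access --allow-forwarded-traffic --allow-gateway-transit
az network vnet peering create -g $RG --name spoke01-to-hub \
  --vnet-name az-ds-vnet-spoke01 --remote-vnet az-ds-hub-vnet \
  --allow-vnet-access --allow-forwarded-traffic --use-remote-gateways

# Route table for spoke01 with gateway route propagation enabled ("Propagate gateway routes" = Yes)
az network route-table create -g $RG --name rt-spoke01 \
  --disable-bgp-route-propagation false

# Route to the other spoke through the hub VPN gateway
az network route-table route create -g $RG --route-table-name rt-spoke01 \
  --name to-spoke02 --address-prefix 10.20.0.0/16 \
  --next-hop-type VirtualNetworkGateway

# Associate the route table with the spoke01 workload subnet (subnet name assumed)
az network vnet subnet update -g $RG --vnet-name az-ds-vnet-spoke01 \
  --name default --route-table rt-spoke01
```

The same steps would be repeated for az-ds-vnet-spoke02 with a route table pointing back at 10.10.0.0/16.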
In fact, if you take a VM located in one of the spoke networks and look at its Effective routes, you will see in the routing table a route with source "Virtual Network Gateway" to the local network 192.168.0.0/16, even though we never configured it explicitly.
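The same effective routes can also be read from the CLI against the VM's network interface (the NIC name spoke01-nic is assumed here):

```bash
# Show the effective routes applied to the Spoke01 VM's NIC
az network nic show-effective-route-table \
  --resource-group az-ds-rg --name spoke01-nic --output table
```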
As you can see, with the current setup there is connectivity between the two spoke networks:
and also from the laptop on the local network to Azure:
Azure Firewall
Let's create the Azure Firewall
And once the deployment is complete, let's take note of the private IP of the firewall.
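As a reference, here is a rough CLI sketch of the deployment and of reading the private IP. It assumes the azure-firewall CLI extension is installed; the resource group az-ds-rg, the firewall name az-ds-fw, the public IP az-ds-fw-pip and the policy az-ds-fw-policy are names chosen here for illustration, and the JSON query path for the private IP should be verified against the actual command output.

```bash
RG=az-ds-rg

# Public IP required by Azure Firewall
az network public-ip create -g $RG --name az-ds-fw-pip \
  --sku Standard --allocation-method Static

# Firewall policy and the firewall itself, placed in the AzureFirewallSubnet of the hub VNet
az network firewall policy create -g $RG --name az-ds-fw-policy
az network firewall create -g $RG --name az-ds-fw --firewall-policy az-ds-fw-policy
az network firewall ip-config create -g $RG --firewall-name az-ds-fw \
  --name fw-ipconfig --public-ip-address az-ds-fw-pip --vnet-name az-ds-hub-vnet

# Private IP of the firewall (used later as the next hop in the route tables)
az network firewall show -g $RG --name az-ds-fw \
  --query "ipConfigurations[0].privateIpAddress" -o tsv
```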
Azure traffic
At this point we modify the two routes associated with the spoke networks, changing the next hop from the Virtual Network Gateway to the firewall.
Let's also change "Propagate gateway routes" to No: we no longer want routes to propagate from the gateway, because we want to force traffic through the firewall.
Repeat this step on both spoke networks.
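With the CLI, and reusing the hypothetical names from the earlier sketch (az-ds-rg, rt-spoke01), the change would look roughly like this:

```bash
RG=az-ds-rg

# "Propagate gateway routes" = No on the spoke route table
az network route-table update -g $RG --name rt-spoke01 \
  --disable-bgp-route-propagation true

# The next hop becomes the firewall's private IP instead of the VPN gateway
az network route-table route update -g $RG --route-table-name rt-spoke01 \
  --name to-spoke02 --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.7.1.4

# Repeat for the route table of az-ds-vnet-spoke02 (prefix 10.10.0.0/16)
```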
Let's test the communication between the Spoke01 and Spoke02 machines again:
As you can see, it times out.
Previously connectivity worked because traffic went through the gateway, taking advantage of the two peerings; now the communication no longer passes through the gateway but through the firewall, where we have not yet configured any rules.
You can also see from the Effective routes of the Spoke01 VM that the route with source Virtual Network Gateway is gone, and the user-defined route indicates the firewall as the next hop.
Let's set a firewall rule to allow traffic between the two spoke networks.
In the firewall policy associated with the firewall, under Network rules, create a new rule collection:
and we define two rules:
The first "AllowICMP-to-Spoke01" will have as source the address range of the spoke network "az-ds-vnet-spoke02", Protocol ICMP in this case, Destination the address range of the first spoke network "az-ds-vnet-spoke01".
The second will be specular but will go from the network "az-ds-vnet-spoke01" to "az-ds-vnet-spoke02".
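A possible CLI equivalent, assuming the hypothetical policy name az-ds-fw-policy from before and an illustrative rule collection group called NetworkRules:

```bash
RG=az-ds-rg

# Rule collection group to host our network rules (name and priority are arbitrary)
az network firewall policy rule-collection-group create -g $RG \
  --policy-name az-ds-fw-policy --name NetworkRules --priority 200

# Collection "Spoke-to-Spoke" with the first ICMP rule (spoke02 -> spoke01)
az network firewall policy rule-collection-group collection add-filter-collection -g $RG \
  --policy-name az-ds-fw-policy --rule-collection-group-name NetworkRules \
  --name Spoke-to-Spoke --collection-priority 200 --action Allow \
  --rule-type NetworkRule --rule-name AllowICMP-to-Spoke01 \
  --ip-protocols ICMP --source-addresses 10.20.0.0/16 \
  --destination-addresses 10.10.0.0/16 --destination-ports '*'

# Mirror rule (spoke01 -> spoke02) added to the same collection
az network firewall policy rule-collection-group collection rule add -g $RG \
  --policy-name az-ds-fw-policy --rule-collection-group-name NetworkRules \
  --collection-name Spoke-to-Spoke --rule-type NetworkRule \
  --name AllowICMP-to-Spoke02 --ip-protocols ICMP \
  --source-addresses 10.10.0.0/16 --destination-addresses 10.20.0.0/16 \
  --destination-ports '*'
```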
Let's test the ping again:
Ping successful.
On-premises traffic
For on-premises connectivity we have to create a new route table with two routes and associate it with the GatewaySubnet of the hub network:
"hub-to-spoke01", Address Prefix 10.10.0.0/16, next-hop Virtual Appliance 10.7.1.4
"hub-to-spoke02", Address Prefix 10.20.0.0/16, next-hop Virtual Appliance 10.7.1.4
and update the route tables associated with the spoke networks, adding a route to the local network:
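Sketched with the CLI, again with the hypothetical names used so far (az-ds-rg, rt-spoke01, and a new route table rt-gateway):

```bash
RG=az-ds-rg

# Route table for the GatewaySubnet: traffic arriving from on-premises and destined
# for the spokes is sent to the firewall instead of following the peering directly
az network route-table create -g $RG --name rt-gateway
az network route-table route create -g $RG --route-table-name rt-gateway \
  --name hub-to-spoke01 --address-prefix 10.10.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.7.1.4
az network route-table route create -g $RG --route-table-name rt-gateway \
  --name hub-to-spoke02 --address-prefix 10.20.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.7.1.4
az network vnet subnet update -g $RG --vnet-name az-ds-hub-vnet \
  --name GatewaySubnet --route-table rt-gateway

# On each spoke route table, add the on-premises prefix, also via the firewall
az network route-table route create -g $RG --route-table-name rt-spoke01 \
  --name to-onprem --address-prefix 192.168.20.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.7.1.4
```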
Again, we need to allow this traffic in the firewall policy.
This time we choose a different port, for example 3389 for Remote Desktop.
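A sketch of the corresponding network rule, added for simplicity to the same illustrative collection as before:

```bash
# Allow RDP (TCP 3389) from the on-premises range to the spoke networks
az network firewall policy rule-collection-group collection rule add -g az-ds-rg \
  --policy-name az-ds-fw-policy --rule-collection-group-name NetworkRules \
  --collection-name Spoke-to-Spoke --rule-type NetworkRule \
  --name AllowRDP-from-onprem --ip-protocols TCP \
  --source-addresses 192.168.20.0/24 \
  --destination-addresses 10.10.0.0/16 10.20.0.0/16 \
  --destination-ports 3389
```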
The connection works:
Troubleshooting
When you experience problems or cannot figure out why connectivity is missing, a very useful tool is Network Watcher.
Network Watcher is available from the Azure portal as a network monitoring and diagnostic service and offers a series of tools such as:
Next-Hop
IP flow verify
NSG diagnostic
Effective Security rules
Packet Capture
Connection Troubleshoot
Next-Hop is useful for identifying the next hop that a data packet passes through to reach a certain destination.
For example, in our architecture, to reach the local network from the VM in the "az-ds-vnet-spoke01" network, the next hop is Azure Firewall, because we have set the routes correctly.
So if we notice a problem, we can go straight to the firewall to verify that it allows the traffic and is not blocking it.
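The same check can be run from the CLI (az-ds-rg is still the assumed resource group):

```bash
# Ask Network Watcher which next hop Spoke01 would use to reach the on-premises laptop
az network watcher show-next-hop --resource-group az-ds-rg \
  --vm Spoke01 --source-ip 10.10.0.4 --dest-ip 192.168.20.40
```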
Connection Troubleshoot, on the other hand, verifies a direct TCP connection from one VM to another.
Suppose we want to check whether the SQLVM02 machine, which hosts a SQL Server on the spoke network "az-ds-vnet-spoke02", is correctly reachable on SQL port 1433 from the Spoke01 VM.
Click Check and wait for the result:
You can see the network path Spoke01 --> Firewall --> SQLVM02, but the status is "Unreachable".
We don't have a rule in the firewall policy that allows traffic on port 1433.
Let's add it.
Repeat the test: the status is now "Reachable".
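The same connectivity test can also be run from the CLI, provided the Network Watcher agent extension is installed on the source VM:

```bash
# TCP connectivity test from Spoke01 to SQLVM02 on the SQL Server port
az network watcher test-connectivity --resource-group az-ds-rg \
  --source-resource Spoke01 --dest-resource SQLVM02 --dest-port 1433
```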
Conclusions
In this post we have seen the benefits of a hub-spoke architecture, how to configure Azure Firewall and route traffic without loss of connectivity, and how to monitor and troubleshoot with Network Watcher.