The following tests were performed on the SD-WAN enterprise topology built and configured in the previous article.
Internet Traffic – Link Failure
As a first test, we simulate the failure of a link and observe how traffic towards the internet behaves. To do so, we launch a continuous ping from a virtual PC connected to the Branch1 LAN towards the Google DNS IP 8.8.8.8:
In the SD-WAN Rules section, the preferred output interface is port1 because of its better performance compared to port2:
By running a sniffer on the firewall we can see in detail the ICMP echo request packets directed to the host 8.8.8.8. As shown in the image below, packets arrive on port3 and, once processed, are forwarded out of port1 with IP 220.127.116.11 as the source NAT address. Responses follow the inverse path: ICMP echo reply packets enter from port1 and are sent out from port3 towards the host 172.16.10.1.
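The capture above can be reproduced with the FortiOS built-in sniffer. The invocation below is one possible form, using Google's DNS address 8.8.8.8 as the filter; verbosity level 4 prints the interface each packet traverses, which is what reveals the port3-to-port1 path:

```
# Capture ICMP traffic to/from 8.8.8.8 on all interfaces;
# verbosity 4 includes the interface name for each packet
diagnose sniffer packet any 'host 8.8.8.8 and icmp' 4
```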
At this point we simulate a connectivity failure by turning off the router connected to port1. A continuous ping towards 8.8.8.8 is running from the LAN VPC of Branch1: the ICMP requests time out as soon as the router is switched off, but then resume.
In case of a connectivity interruption, the Performance SLA detects it and declares port1 and all attached VPN interfaces as "dead," while the other interfaces remain "alive."
The routing table is then updated and all routes through the "dead" interfaces are removed.
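On the CLI, the member state and the updated routing table can be checked with commands along these lines (the exact diagnose path depends on the FortiOS version: 6.4 and later use `sdwan`, older releases `virtual-wan-link`):

```
# Health-check status of the SD-WAN members (FortiOS 6.4+)
diagnose sys sdwan health-check

# Routing table: routes via "dead" members should no longer appear
get router info routing-table all
```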
In the SD-WAN Rules section the designated interface for internet traffic then becomes port2:
Indeed, the sniffer on the firewall shows the ICMP echo request packets directed towards 8.8.8.8 arriving on port3 and being sent out of port2 with source NAT on IP 126.96.36.199. The response packets follow the reverse path: ICMP echo reply packets enter from port2 and are sent out from port3 towards the host 172.16.10.1.
Once the router connected to port1 is restored, after a short period the traffic is redirected back to this output interface without packet loss.
Traffic towards HUB – Interface Preference
The second test concerns modifying the SD-WAN rule for the traffic addressed to the DC, so as to make the certified link on port2 – and therefore the VPN "WAN2-DC" – the preferred choice. The sample rule below defines "WAN1-DC" as the chosen interface, with latency as the selection criterion.
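As a rough sketch of how such a latency-based rule looks on the CLI (FortiOS 7.x syntax; the rule ID, member IDs, and the "DC_LAN" address object are illustrative, not taken from the article):

```
config system sdwan
    config service
        edit 2
            set name "To-DC"
            set dst "DC_LAN"                # hypothetical address object for the DC subnet
            set mode priority               # pick the best-quality member...
            set link-cost-factor latency    # ...measured on latency
            set health-check "pingDC"
            set priority-members 3 4        # WAN1-DC, WAN2-DC (illustrative member IDs)
        next
    end
end
```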
If we ping the LAN interface of the FortiGate from the VPC on the Branch1 LAN, we can observe how the traffic is routed through the VPN tunnel.
At this point, we take the Performance SLA "pingDC" defined previously and configure an SLA target on it:
In the SD-WAN rule section, we set the strategy to Lowest Cost (SLA) and change the order of the interfaces, entering WAN2-DC as the preferred one. As the SLA target we select "pingDC".
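The equivalent CLI change is roughly the following (again FortiOS 7.x syntax with illustrative IDs): the rule switches to `sla` mode, references the "pingDC" health check with its SLA target, and lists WAN2-DC first:

```
config system sdwan
    config service
        edit 2
            set mode sla
            config sla
                edit "pingDC"
                    set id 1            # SLA target defined under the health check
                next
            end
            set priority-members 4 3    # WAN2-DC now preferred (illustrative member IDs)
        next
    end
end
```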
Packets that were previously routed through the "WAN1-DC" VPN tunnel are now directed to the DC through the new preferred tunnel:
The packet sniffer shows how the routing changes, even though the change is completely transparent to the user (in this case the VPC).
At this point, if connectivity fails on port2, the traffic is redirected to the other tunnel with no packet loss.
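Which member a rule is currently steering traffic to can also be verified from the CLI; on recent FortiOS releases the command is along these lines:

```
# Show the SD-WAN rules with the members currently selected
# ("sdwan" on FortiOS 6.4+, "virtual-wan-link" on older releases)
diagnose sys sdwan service
```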
Branch-to-Branch Traffic – Redundancy and Business Continuity
As a third test, we verify the communication between different branches, which is established through the two HUBs. The SD-WAN rule that allows this traffic employs the link with the lowest latency – WAN1-HQ in this example – because of its better performance.
What would happen if HQ suffered an infrastructural failure so severe that its firewall became unreachable? To test this scenario, we turn off the HQ appliance and leave a continuous ping running towards Branch2 (IP 172.16.20.254):
The first two packets are lost while the ARP tables are updated; then traffic flows correctly again. From the sniffer on the Branch1 FortiGate we can observe the change of output interface for the ICMP requests (from WAN1-HQ to WAN1-DC).
In the SD-WAN rule we can then see that the selected interface is now WAN1-DC.