Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.
Jadron
New Contributor

Problems with 5.4 VPN Tunnels

Hello, 

 

This IPsec VPN issue is becoming a major problem for our clients and seems to be some sort of bug. We have a few dozen FortiGate 60Es running FortiOS 5.4 in a hub-and-spoke configuration.

 

Issue as follows:

-Usually a single user at a branch site suddenly can't access a critical server at the hub site.

-The user can't ping the server over the tunnel, and the server can't ping the user over the tunnel.

-The user can ping EVERY other server and resource over the tunnel just fine.

 

What we've discovered:

-If we disable the tunnel interface on both hub and spoke, clear ARP on both FortiGates, then bring both interfaces back online, the problem is solved.....for a while. It comes back with some random user within a week or two. This is of course disruptive to do in the middle of the day.

-We've discovered that changing the remote branch user's local IP will "solve" the communication issue, and we've verified this is not an IP conflict. It's a quick and dirty fix, not a real solution, but sometimes better than the fix above since it won't bring the tunnel down.

-If you run a traceroute from the affected system to the server it can't reach, the hops after the local FortiGate seem to head out toward external networks. Tracerouting a server it CAN reach over the tunnel looks normal and hops through the internal interfaces of both the spoke and hub units.
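For anyone hitting something similar, two standard FortiOS diagnostics can show what the FortiGate is actually doing with the affected traffic while the symptom is live (the server IP below is just a placeholder):

get router info routing-table details 10.10.10.50

diagnose sniffer packet any 'host 10.10.10.50' 4

The first shows which route/interface the box selects for that destination; the second shows whether the packets are actually entering the tunnel interface or leaking out the WAN, which would match the odd traceroute behavior above.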

 

As mentioned, this has been happening at multiple clients for whom we've implemented the same hub-and-spoke solution. Tunnels have been configured differently, or rebuilt in some cases, but the issue still occurs.

 

Any suggestions or help is appreciated.

Toshi_Esumi
SuperUser

Just a random guess, but does the symptom stop happening if you disable npu-offload in the phase1-interface config on both sides of the tunnel, as in the manual description below?

config vpn ipsec phase1-interface
    edit phase-1-name
        set npu-offload disable
    next
end
Jadron

We may have a solution. This is happening to 4 different clients of ours with similar setups.

 

Front Runner solution:

We contacted Fortinet support, and a tech had us apply the following config via CLI to the remote router's firewall policy handling LAN -> remote site traffic over the IPsec tunnel (only the LAN -> remote site policy, not the return policy):

set tcp-mss-sender 1300

set tcp-mss-receiver 1300

This seems to have solved it in that instance. We don't know exactly why this works.
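For completeness, those two settings sit inside the firewall policy itself, so the full CLI context looks like this (policy ID 5 is a placeholder for whichever policy carries LAN -> remote site traffic):

config firewall policy
    edit 5
        set tcp-mss-sender 1300
        set tcp-mss-receiver 1300
    next
end

The MSS clamp forces TCP endpoints to negotiate segments small enough to fit inside the tunnel's reduced MTU.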

 

We will give it a shot at the other clients when the issue occurs again.

 

Toshi_Esumi

What they're suspecting is that it's caused by fragmentation. Previously, larger packets were dropped at the NPU on xxD models, but that was fixed in 5.4.5; there still seem to be more issues related to fragmentation, though. Since the 60E has a new ASIC chip, the above problem doesn't apply, but I guessed some new problems might exist because of it, which is why I asked you to try disabling NPU offloading.

TCP MSS adjustment avoids fragmentation only for TCP traffic, not for UDP or ICMP. Just keep that in mind.
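If you want to confirm fragmentation is the culprit, one way is to ping across the tunnel with the don't-fragment bit set and a large payload, using the standard FortiOS ping options (the server IP is a placeholder):

execute ping-options df-bit yes
execute ping-options data-size 1400
execute ping 10.10.10.50

If that fails but a smaller data-size succeeds, packets above the tunnel's effective MTU are being dropped, which would point at the fragmentation issue described above.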
