Really Poor SMB performance

Author
atsak
New Member
  • Total Posts : 16
  • Scores: 0
  • Reward points: 0
  • Joined: 2017/07/21 11:13:47
  • Status: offline
2019/03/04 12:17:31 (permalink)
0

Really Poor SMB performance

Fortinet to Fortinet, 100E to 60E, IPSec Tunnel, gigabit connection on the 100E and 400mbit on the 60E.
SMB transfers are slow, about 2-3 Mbps.
 
I have adjusted tcp-mss in the IPv4 policy for the indicated branch, and on the IPsec interface itself, to 1306 (which is low, but higher values don't help; when left at default it was fragmenting, so I lowered it):
 
config sys interface
edit <interfacename>
set tcp-mss 1306
end
 
AND
config firewall policy
edit <policy number>
set tcp-mss-sender 1306
set tcp-mss-receiver 1306
end
 
(configured both legs of the firewall policy, inbound and outbound, on both firewalls)
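As a sanity check on the 1306 value, here is a rough calculation (not from the thread) of the largest MSS that avoids fragmentation. The overhead figures are assumptions for a typical ESP tunnel-mode SA; the real overhead depends on the negotiated proposal (cipher, NAT-T, etc.).

```python
# Rough sanity check of a TCP MSS value for traffic inside an IPsec tunnel.
# Overhead figures are ASSUMPTIONS for a typical ESP tunnel-mode SA
# (e.g. AES-CBC + SHA1, no NAT-T); actual overhead varies with the proposal.

def max_tcp_mss(link_mtu: int,
                esp_overhead: int = 73,   # outer IP hdr (20) + ESP hdr/IV/pad/trailer/ICV (~53)
                ip_header: int = 20,      # inner IP header
                tcp_header: int = 20) -> int:
    """Largest TCP payload that fits in one packet after ESP encapsulation."""
    inner_mtu = link_mtu - esp_overhead          # room left for the inner IP packet
    return inner_mtu - ip_header - tcp_header    # strip inner IP + TCP headers

print(max_tcp_mss(1500))  # -> 1387 with these assumed overheads
```

With these assumed overheads the ceiling is around 1387 on a 1500-byte link, so 1306 leaves plenty of headroom; if packets still fragment, the real per-packet overhead (e.g. NAT-T's extra 8-byte UDP header) is larger than assumed here.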
 
Perhaps of note: IPsec tunnels to Juniper firewalls perform normally (tcp-mss also set to 1306) . . .
 
What setting am I missing?
#1

7 Replies

    atsak
    New Member
    • Total Posts : 16
    • Scores: 0
    • Reward points: 0
    • Joined: 2017/07/21 11:13:47
    • Status: offline
    Re: Really Poor SMB performance 2019/03/06 11:45:33 (permalink)
    0
    Any help at all appreciated. I suspect the issue is that the tcp-mss setting isn't taking effect, but I simply can't find any other place to set it.
    #2
    Dave Hall
    Expert Member
    • Total Posts : 1360
    • Scores: 140
    • Reward points: 0
    • Joined: 2012/05/11 07:55:58
    • Location: Canada
    • Status: offline
    Re: Really Poor SMB performance 2019/03/06 12:45:38 (permalink)
    0
    Maybe related - see this post regarding disabling ASIC and HMAC offloading for IPsec:
     
    config system global
    set ipsec-hmac-offload disable
    set ipsec-asic-offload disable
    end

    NSE4/FMG-VM64/FortiAnalyzer-VM/5.2/5.4 (FWF40C/FW92D/FGT200B/FGT200D/FGT101E)/ FAP220B/221C
    #3
    atsak
    New Member
    • Total Posts : 16
    • Scores: 0
    • Reward points: 0
    • Joined: 2017/07/21 11:13:47
    • Status: offline
    Re: Really Poor SMB performance 2019/03/06 12:47:23 (permalink)
    0
    Thanks - has anyone done this?  Does it interrupt service?
    #4
    atsak
    New Member
    • Total Posts : 16
    • Scores: 0
    • Reward points: 0
    • Joined: 2017/07/21 11:13:47
    • Status: offline
    Re: Really Poor SMB performance 2019/03/06 20:33:59 (permalink)
    0
    OK, it does interrupt service, but only for a second or so. It also has no effect; in fact I already had it set on both sides of one of the tunnels (from a 200E to a 100E). Still only getting about 8 Mbit/s on gigabit links . . .
     
    Any other possibilities?
    #5
    Sasha_FTNT
    New Member
    • Total Posts : 2
    • Scores: 0
    • Reward points: 0
    • Joined: 2019/03/08 06:38:06
    • Status: offline
    Re: Really Poor SMB performance 2019/03/08 07:03:52 (permalink)
    0
    Atsak, did you try the latest IPS engine (an interim build)? Some SMB performance improvements were made in it.
    Please raise a ticket with support so they can provide you with the latest interim IPS engine.
    BTW, which firmware version has the problem?
    #6
    atsak
    New Member
    • Total Posts : 16
    • Scores: 0
    • Reward points: 0
    • Joined: 2017/07/21 11:13:47
    • Status: offline
    Re: Really Poor SMB performance 2019/03/08 12:25:50 (permalink)
    0
    IPS is disabled.
    A ticket is open with Support; they have not replied (two days now, I'd better follow up).
    Firmware is 5.6.3 build 1547 on both.
     
    Really important to note: the issue only exists between the 60E and the datacenter 100E. In offices where we have a Juniper, this doesn't happen; throughput is normal (maxes out the CPU on the Juniper SSG 5s at around 40 Mbit). This makes me think it's a PMTU or tcp-mss setting, but I don't know which one to toggle to fix it. This has to be a common problem, I would think, no?
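    One way to test the PMTU suspicion (not from the thread) is to probe the path with DF-bit pings of increasing size (`ping -M do -s <size>` on Linux) and binary-search for the largest payload that gets through. The sketch below shows that search with the actual probe supplied by the caller; the sizes and the simulated path MTU in the example are illustrative assumptions.

```python
# Hypothetical sketch: binary-search the largest ICMP payload that crosses
# the path unfragmented (what `ping -M do -s <size>` does by hand).
# probe(size) -> bool is supplied by the caller, e.g. a wrapper that runs a
# DF-bit ping and returns True on success.

def find_max_payload(probe, lo: int = 0, hi: int = 1472) -> int:
    """Largest payload size for which probe(size) succeeds (assumes probes
    are monotone: once a size fails, all larger sizes fail)."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if probe(mid):
            best = mid        # mid fits; try larger
            lo = mid + 1
        else:
            hi = mid - 1      # mid too big; try smaller
    return best

# Simulated path whose effective MTU is 1414 (payload + 28 bytes ICMP/IP headers):
print(find_max_payload(lambda s: s + 28 <= 1414))  # -> 1386
```

    Adding 28 bytes of ICMP/IP headers to the result gives the effective path MTU; if that comes out well below the tunnel MTU the firewalls advertise, fragmentation or PMTU blackholing inside the tunnel is the likely culprit.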
    #7
    MdMan85
    New Member
    • Total Posts : 3
    • Scores: 2
    • Reward points: 0
    • Joined: 2018/10/01 11:08:28
    • Status: offline
    Re: Really Poor SMB performance 2019/03/13 05:37:23 (permalink)
    0
    If you find something, please let me know as well. I've been looking for a long time and have come up empty-handed. The one thing that helped was forcing NAT traversal on the tunnel, but the improvement was barely noticeable. The command below was run on both ends (only effective Fortinet to Fortinet):
     
    config vpn ipsec phase1-interface
    edit phase1name
    set nattraversal forced
    next
    end


    Hope this makes a difference for you.
    #8