paulinster
New Contributor III

Significant network performance drop

Hi Community,

It looks like I am experiencing a significant throughput issue when I compare an iperf test run from the FortiGate itself against one run from a Linux host. For example, see the iperf snippets below.


FGT IPERF UPLOAD
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 9] 0.00-20.00 sec 1.57 GBytes 675 Mbits/sec 0 sender
[ 9] 0.00-20.00 sec 1.57 GBytes 675 Mbits/sec receiver

LINUX IPERF UPLOAD
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-20.00 sec 819 MBytes 343 Mbits/sec 490 sender
[ 4] 0.00-20.00 sec 816 MBytes 342 Mbits/sec receiver



FGT IPERF DOWNLOAD
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 9] 0.00-20.00 sec 2.13 GBytes 916 Mbits/sec 400 sender
[ 9] 0.00-20.00 sec 2.13 GBytes 916 Mbits/sec receiver

LINUX IPERF DOWNLOAD
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-20.00 sec 59.5 MBytes 25.0 Mbits/sec 282 sender
[ 4] 0.00-20.00 sec 58.9 MBytes 24.7 Mbits/sec receiver
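
For reference, a FortiGate-side iperf test like the one above can be run with the built-in traffic test client. This is only a rough sketch, assuming FortiOS's diagnose traffictest tool and the same iperf3 server/port used in the later tests; the source interface is an assumption:

# FortiOS built-in iperf3 client (sketch)
diagnose traffictest show                        # current traffictest settings
diagnose traffictest port 9089                   # iperf3 server port
diagnose traffictest client-intf ISP             # source interface for the test (assumption)
diagnose traffictest run -c 184.73.12.142 -t 20  # 20-second client test against the server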


The policy used for the traffic from the Linux host is the following; I have already disabled all UTM features.

config firewall policy
    edit 993
        set srcintf "USERS"
        set dstintf "ISP"
        set srcaddr "users.4dc.host.netdisco"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
        set logtraffic all
        set ippool enable
        set poolname "natpool-servers-general"
        set nat enable
    next
end

I can't figure out what is causing this significant throughput drop. Any help would be appreciated.

Thanx!
6 REPLIES
AEK
SuperUser

Hello

Which FGT model and FOS version are you using?

AEK
hbac
Staff

Hi @paulinster,

 

Is the Linux machine connected directly to the FortiGate? If not, please try to run the iperf test with the host directly connected to the FortiGate and see if you get the same result.

 

Regards, 

paulinster
New Contributor III

 @hbac  and @AEK 

 

Hardware is a FortiGate 501E.

Until Wednesday the 22nd it was running FortiOS 6.4.13, but then, as per Fortinet support, I rebooted the system and took the opportunity to upgrade to 7.0.13 at the same time.

 

Unfortunately the problem is still present.  

 

I have done more testing with another system that I connected directly to the Internet side so I could bypass the firewall. Performance was awesome: I was getting close to 1 Gbps in both directions. But as soon as I moved the host back to the internal side, download performance dropped, as shown below.

 

UPLOAD

[lpaulin@netdisco_{{DEV}} ~]$ iperf3 -c 184.73.12.142 -p 9089 -t 10
Connecting to host 184.73.12.142, port 9089
[ 4] local 10.250.52.192 port 60144 connected to 184.73.12.142 port 9089
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 60.3 MBytes 505 Mbits/sec 104 1.11 MBytes
[ 4] 1.00-2.00 sec 66.2 MBytes 556 Mbits/sec 0 1.18 MBytes
[ 4] 2.00-3.00 sec 62.5 MBytes 524 Mbits/sec 0 1.24 MBytes
[ 4] 3.00-4.00 sec 66.2 MBytes 556 Mbits/sec 0 1.27 MBytes
[ 4] 4.00-5.00 sec 72.5 MBytes 608 Mbits/sec 0 1.30 MBytes
[ 4] 5.00-6.00 sec 67.5 MBytes 566 Mbits/sec 0 1.31 MBytes
[ 4] 6.00-7.00 sec 70.0 MBytes 587 Mbits/sec 0 1.33 MBytes
[ 4] 7.00-8.00 sec 71.2 MBytes 598 Mbits/sec 0 1.37 MBytes
[ 4] 8.00-9.00 sec 71.2 MBytes 598 Mbits/sec 0 1.41 MBytes
[ 4] 9.00-10.00 sec 73.8 MBytes 619 Mbits/sec 0 1.45 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 682 MBytes 572 Mbits/sec 104 sender
[ 4] 0.00-10.00 sec 679 MBytes 570 Mbits/sec receiver

iperf Done.

 

DOWNLOAD
[lpaulin@netdisco_{{DEV}} ~]$ iperf3 -c 184.73.12.142 -p 9089 -t 10 -R
Connecting to host 184.73.12.142, port 9089
Reverse mode, remote host 184.73.12.142 is sending
[ 4] local 10.250.52.192 port 60334 connected to 184.73.12.142 port 9089
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 10.2 MBytes 85.3 Mbits/sec
[ 4] 1.00-2.00 sec 9.95 MBytes 83.5 Mbits/sec
[ 4] 2.00-3.00 sec 7.08 MBytes 59.4 Mbits/sec
[ 4] 3.00-4.00 sec 7.86 MBytes 66.0 Mbits/sec
[ 4] 4.00-5.00 sec 8.36 MBytes 70.1 Mbits/sec
[ 4] 5.00-6.00 sec 9.90 MBytes 83.0 Mbits/sec
[ 4] 6.00-7.00 sec 9.21 MBytes 77.3 Mbits/sec
[ 4] 7.00-8.00 sec 11.5 MBytes 96.5 Mbits/sec
[ 4] 8.00-9.00 sec 10.1 MBytes 85.1 Mbits/sec
[ 4] 9.00-10.00 sec 12.0 MBytes 101 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 96.9 MBytes 81.3 Mbits/sec 134 sender
[ 4] 0.00-10.00 sec 96.4 MBytes 80.9 Mbits/sec receiver

iperf Done.

I even went through all interfaces and policies and removed all traffic shaping profiles and policies to make sure they weren't the issue. At this point there should be no shaping profiles or policies left.
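
As a sanity check, something like the following FortiOS CLI should show nothing shaping-related left. This is just a sketch; a few built-in default shapers will still be listed under traffic-shaper even when unused:

show firewall shaping-policy            # shaping policies (should be empty)
show firewall shaping-profile           # interface shaping profiles (should be empty)
show firewall shaper traffic-shaper     # shared traffic shapers (defaults remain)
show firewall shaper per-ip-shaper      # per-IP shapers (should be empty)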

 

Any hint would be appreciated.

BillH_FTNT

Hi @paulinster 

I think you should test two more cases to narrow this down:

1. Test UDP traffic through the firewall.

2. Test TCP traffic with multiple destinations and multiple ports.
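
For example, something along these lines from the Linux host (a sketch reusing the same server and port as the earlier tests; the bitrate and stream count are only illustrative):

# 1. UDP through the firewall (-u), both directions, with a target bitrate
iperf3 -c 184.73.12.142 -p 9089 -t 10 -u -b 900M
iperf3 -c 184.73.12.142 -p 9089 -t 10 -u -b 900M -R

# 2. TCP with multiple parallel streams (-P); repeat against other servers/ports if available
iperf3 -c 184.73.12.142 -p 9089 -t 10 -P 8
iperf3 -c 184.73.12.142 -p 9089 -t 10 -P 8 -R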

Regards

Bill

AEK
SuperUser

Hi

  • Check that all related interfaces have negotiated 1Gb/s full duplex and not 100Mb/s
  • Check for rx/tx errors on the related FG interfaces
  • Check the FG CPU usage (%); example CLI commands below
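
A sketch of the corresponding CLI checks (interface names are placeholders; use the ports behind "USERS" and "ISP"):

# Speed/duplex negotiation and rx/tx error counters per interface
get hardware nic port1                  # look at Speed, Duplex, and error/drop counters
diagnose hardware deviceinfo nic port1  # lower-level NIC counters (errors, collisions, drops)

# CPU / memory load
get system performance status           # overall CPU, memory, and session figures
diagnose sys top 2 50                   # top processes, useful if one CPU core is pegged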
AEK
Cajuntank
Contributor II

Not really adding much to this, except to say that I battled some performance issues myself a few weeks ago and was also using iperf on the FortiGate as one of my endpoints, and I saw some erroneous/questionable output. My real test was still across the FortiGate, but between two devices (e.g. Linux server to MacBook, Windows PC to MacBook, etc.): PCs or servers traversing the FortiGate, without using the FortiGate's own interface as one of the hosts in the test. I got a truer result with that approach in my scenario (maybe because I had to test at 10Gb); not sure.
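
For example, a minimal sketch of that two-host approach (the addresses and port below are placeholders; the point is that neither endpoint is the FortiGate itself):

# On a host behind one FortiGate interface (e.g. the USERS side)
iperf3 -s -p 5201

# On a host on the other side of the FortiGate, pointing at the first host
iperf3 -c 10.250.52.10 -p 5201 -t 20       # forward direction
iperf3 -c 10.250.52.10 -p 5201 -t 20 -R    # reverse direction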
