Unless someone can say otherwise, I do not think you need to worry about the duplex/speed being an issue - the diag tests do not show a problem with them; otherwise you would see various rx/tx counter errors that increase over time. That said, if you want to set/force the duplex/speed on an interface, you can do this via the CLI:

config system interface
    edit <interface name>
        set speed ?
    next
end

where ? is one of:
auto      Automatically adjust speed.
10full    10M full-duplex.
10half    10M half-duplex.
100full   100M full-duplex.
100half   100M half-duplex.
1000full  1000M full-duplex.
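If you want to spot-check those counters yourself, a command along these lines should work (wan1 is a placeholder for your actual port name, and the exact counter names vary by model/NIC):

```
diagnose hardware deviceinfo nic wan1
```

Watch the rx_errors/tx_errors (or similarly named) counters over time - with a healthy duplex/speed match they should stay at or near zero.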
What I mean by setting the ingress/egress values on both ISP connections is to set values for "Estimated Bandwidth" on each Interface.
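For reference, this can also be done from the CLI - a sketch, assuming the interface is called wan1 on a 100M down / 50M up line (values are in kbit/s; adjust to your actual ISP speeds):

```
config system interface
    edit wan1
        set estimated-upstream-bandwidth 50000
        set estimated-downstream-bandwidth 100000
    next
end
```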
Later fgt firmware versions come with some nice SD-WAN settings/monitoring tools. I would make sure that all WAN interfaces have the proper default route and distance/metric, and that you have set up the load balancing (aka SD-WAN Rules). The SD-WAN monitor will tell you how many sessions are open/going out via each ISP connection.
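As a rough sketch of what the load-balancing setup looks like in the CLI on newer firmware (6.4+ uses "config system sdwan"; older versions use "config system virtual-wan-link" instead - the interface names and gateway IPs below are placeholders):

```
config system sdwan
    set status enable
    set load-balance-mode source-ip-based
    config members
        edit 1
            set interface wan1
            set gateway 192.0.2.1
        next
        edit 2
            set interface wan2
            set gateway 198.51.100.1
        next
    end
end
```

The SD-WAN rules themselves (under "config service") then decide which traffic prefers which member.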
If you do not have a bandwidth history graph on the main dashboard, I suggest adding two (one for each ISP connection). I would monitor bandwidth usage, as well as CPU, memory, and sessions. The fgt will (should) go into conserve mode if memory usage goes near/over 80%.
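A few CLI commands that are handy for spot-checking those same numbers (output format varies somewhat between firmware versions):

```
get system performance status
diagnose sys session stat
diagnose hardware sysinfo conserve
```

The first two show CPU/memory/session figures; the last should show the conserve-mode memory thresholds and whether the unit is currently in conserve mode.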
If you have direct access to the ISP gateway devices, I would log into each device and check for any logs or events. Sometimes one side of a WAN connection may look fine, but the other side may tell a different story.
If you have ping health checks enabled (under Performance SLA) you will likely want to confirm they are working as expected. If you are pinging Google's DNS servers, be aware there are rate limits on how often you can ping them.
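If you want to verify those health checks from the CLI, something like this is a starting point (the health-check name and server are examples - e.g. pointing at Cloudflare's 1.1.1.1 instead of Google's 8.8.8.8 sidesteps the rate-limit concern; "set members 1 2" assumes those are your SD-WAN member IDs):

```
config system sdwan
    config health-check
        edit "ping-check"
            set server 1.1.1.1
            set protocol ping
            set members 1 2
        next
    end
end
```

On 6.4+, "diagnose sys sdwan health-check" should then show live latency/jitter/packet-loss per member so you can confirm the checks are actually passing.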
And of course you should check the System Events/Router Events (under Log & Report) for issues.