claytonmeyer
New Contributor

New FG1000D HA Deployment Using Nexus VPC Troubleshooting

Hi all, I'm deploying a new pair of FortiGate 1000Ds in an active/active HA configuration. The outside interfaces connect to a single ISP and seem to be working fine. The issue I'm having is communication between our core Nexus 9Ks and the 1000Ds. This is a multi-tenant environment, so we are leveraging VDOMs on the FortiGates and VRFs on the 9Ks.

 

I'm using individual /29 networks between the FortiGates and the 9Ks for routing. We are also using LACP on the FortiGate side and vPC on the pair of 9Ks. The 9Ks are not new and have been in production for quite some time; they are configured according to Cisco best practices.
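For illustration, the 9K side of one of those links looks roughly like this - a minimal NX-OS sketch assuming an existing vPC domain and peer-link; interface names, channel numbers and the VLAN are placeholders:

feature lacp

interface port-channel20
  description vPC port-channel toward the FG1000D pair
  switchport mode trunk
  switchport trunk allowed vlan 100
  vpc 20

interface Ethernet1/20
  description FG1000D member link
  switchport mode trunk
  switchport trunk allowed vlan 100
  ! mode active = LACP
  channel-group 20 mode active

! The same port-channel and vpc number are configured on the peer 9K.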

 

The issue we ran into was that we were unable to communicate properly between the core switches and the firewalls on one of the /29 networks. After working with Fortinet Support for hours, I was told that best practice is to physically install an L2 switch between the firewalls and our core switches, apparently due to some requirement of the HA configuration. Even though this wasn't clear to me, we did try this method and it didn't resolve the issue.

 

The physical ports on the FortiGate are configured like this: the 802.3ad aggregate is in the root VDOM with no L3 config, and the VLAN interfaces on top of it are each in a unique VDOM. These VLAN interfaces are what use the /29 networks to route to the Nexus 9Ks. Is there something about this design that doesn't seem right? Any ideas what might be causing this problem? Running 5.6.4 code.
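To illustrate, here is a minimal FortiOS 5.6-style sketch of that layout; interface names, VLAN ID, VDOM name and addresses are placeholders:

config global
    config system interface
        # 802.3ad aggregate lives in the root VDOM with no L3 config
        edit "agg1"
            set vdom "root"
            set type aggregate
            set member "port21" "port22"
            set lacp-mode active
        next
        # per-tenant VLAN interface on top of the aggregate,
        # carrying the /29 transit network toward the 9Ks
        edit "vlan100"
            set vdom "tenantA"
            set interface "agg1"
            set vlanid 100
            set ip 192.0.2.1 255.255.255.248
            set allowaccess ping
        next
    end
end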

 

 

1 Solution
ede_pfau

Cisco Nexus switches internally use non-standard Ethertypes - the same ones FortiOS uses on the HA links. This is documented in the HA chapter of the 'FortiOS Handbook'.

To avoid trouble you can change the Ethertypes in the HA setup ('config system ha'); three different types are used.
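For example - a sketch only; the values below are arbitrary unused Ethertypes, must be set to the same values on both cluster members, and the FortiOS defaults are 8890, 8891 and 8893:

config system ha
    # heartbeat Ethertype, NAT/route mode (default 8890)
    set ha-eth-type 8895
    # heartbeat Ethertype, transparent mode (default 8891)
    set hc-eth-type 8896
    # Ethertype for telnet/management sessions over the HA link (default 8893)
    set l2ep-eth-type 8897
end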

 

Your routing / connectivity problems could be a side-effect of this.


Ede

"Kernel panic: Aiee, killing interrupt handler!"

Toshi_Esumi
SuperUser

To me it should work. In our similar case we use a /30 subnet on each VLAN to connect a VDOM to each customer's VRF, because it's an active-passive setup. We don't have any Nexus switches in the mix, though.

emnoc
Esteemed Contributor III

That should be doable; the /29 or /30 is not relevant. Since you mention NX-OS, are the 9Ks acting as L3 routers? If so, per Cisco, this will not work ideally for dynamic routing protocols.

 

Read this and review whether anything here is in your design.

 

http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/

 

 

PCNSE 

NSE 

StrongSwan  

claytonmeyer

The only reason we're using a /29 rather than a /30 is that we need additional IP addresses to accommodate the HSRP VIP on the Nexus 9Ks. The 9Ks are our core switches, so they are doing layer 3. However, we are not running any dynamic routing protocols, so the dynamic routing peering over the vPC caveat does not apply in this situation.
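A rough NX-OS sketch of one tenant SVI (addresses, VLAN and VRF name are hypothetical) - with an address on each 9K plus the shared VIP, plus the FortiGate on the same segment, the two usable hosts of a /30 can't fit, hence the /29:

feature interface-vlan
feature hsrp

interface Vlan100
  vrf member TENANT-A
  ! each 9K gets its own SVI address (.2 here, .3 on the peer)
  ip address 192.0.2.2/29
  no shutdown
  hsrp 100
    ! the VIP the FortiGate uses as its next hop
    ip 192.0.2.4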


claytonmeyer

That is interesting - I hadn't heard about the Ethertype overlap before. I'll look into it, but I can tell you that our heartbeat ports are directly connected to each other; the two FortiGates don't use any switching between them for HA. Do you think this could still apply?

claytonmeyer

I'd really like to hear from those of you running an HA cluster. Do you have to have an L2 device between the FortiGates and your core switching/routing? If so, can anyone explain why that is a requirement?

vulcan603
New Contributor

Did you ever resolve this?

 

I am having a similar issue since upgrading to a Nexus 9K over the weekend.

I can't get my FortiADC load balancers to establish an LACP connection to the 9K.

 

I have found the following, which appears to be a bug in the 9K regarding LACP, but I have not confirmed that it is why LACP isn't working.

 

Started here after some googling.

https://www.reddit.com/r/networking/comments/7uu1zh/nxosv_9000_703i72_vpc_fully_working/

 

The LACP bug is documented here. It seems Cisco-to-Cisco is fine, but Cisco-to-non-Cisco does not work.

https://learningnetwork.cisco.com/thread/120028

 

This shows some sort of workaround for a known LACP bug in 7.0.3.I7.2.

https://techstat.net/cisco-nexus-nx-osv-9000-lacp-vpc-bug-fix/

 


Sunil_Panchal_NSE7

Dear friend,

We are working with the same scenario you want to use, but we are not using FortiGate HA, and we were able to achieve the same thing you want. Please contact me so we can share the information.

We have dual Nexus 9Ks and a FortiGate 1500D with VDOMs, using Layer 3 LACP to communicate with the Nexus core.

Please be in touch for any information.

 

 
