New FG1000D HA Deployment Using Nexus VPC Troubleshooting

Author
claytonmeyer
New Member
  • Total Posts : 5
  • Scores: 0
  • Reward points: 0
  • Joined: 2018/06/06 12:58:32
  • Status: offline
2018/06/06 14:59:14 (permalink)
0

New FG1000D HA Deployment Using Nexus VPC Troubleshooting

Hi all, I'm deploying a new pair of 1000Ds in an active/active configuration. The outside interfaces connect to a single ISP & seem to be working fine. The issue I'm having is communication between our core Nexus 9Ks & the 1000Ds. This is a multi-tenant environment, so we are leveraging VDOMs on the FG & VRFs on the 9Ks.
 
I'm using individual /29 networks between the FG & the 9Ks for routing. We are also using LACP on the FG & vPC on the pair of 9Ks. The 9Ks are not new & have been in production for quite some time; they are configured according to Cisco best practices.
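For context, the Nexus side of one of these links looks roughly like this. This is only a sketch; the port-channel number, vPC ID, member port, and VLAN list are placeholders, not our actual config:
 
feature lacp
feature vpc
 
interface port-channel10
  description LACP bundle to FG1000D cluster
  switchport mode trunk
  switchport trunk allowed vlan 100-101
  vpc 10
 
interface Ethernet1/10
  description member link to FG1000D
  switchport mode trunk
  switchport trunk allowed vlan 100-101
  channel-group 10 mode active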
 
The issue we ran into was that we were unable to communicate properly between the core switches & the firewalls on one of the /29 networks. After working with Fortinet Support for hours, I was told that best practice is to physically install an L2 switch between the firewalls and our core switches, apparently due to some requirement of the HA configuration. Even though the reasoning wasn't clear to me, we did try this and it didn't resolve the issue.
 
The physical ports on the FG are configured like this: the 802.3ad aggregate port is in the root VDOM with no L3 config, and the underlying VLAN interfaces are each in a unique VDOM. These VLAN interfaces are what use the /29 networks to route to the Nexus 9Ks. Is there something about this design that doesn't seem right? Any ideas what might be causing this problem? Running 5.6.4 code.
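A minimal sketch of the FG side of the same link; the interface names, member ports, VLAN ID, and 192.0.2.0/29 addressing are placeholders for the real values:
 
config system interface
    edit "agg-core"
        set vdom "root"
        set type aggregate
        set member "port25" "port26"
        set lacp-mode active
    next
    edit "vlan100-tenantA"
        set vdom "tenantA"
        set interface "agg-core"
        set vlanid 100
        set ip 192.0.2.1 255.255.255.248
        set allowaccess ping
    next
end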
 
 


#1
Toshi Esumi
Expert Member
  • Total Posts : 1259
  • Scores: 89
  • Reward points: 0
  • Joined: 2014/11/06 09:56:42
  • Status: offline
Re: New FG1000D HA Deployment Using Nexus VPC Troubleshooting 2018/06/07 11:10:43 (permalink)
0
To me it should work. But in our similar case, we use a /30 subnet on the VLAN to connect a VDOM to each customer's VRF, since it's an a-p (active-passive) setup. We don't have any Nexus switches in the mix though.
#2
emnoc
Expert Member
  • Total Posts : 5082
  • Scores: 311
  • Reward points: 0
  • Joined: 2008/03/20 13:30:33
  • Location: AUSTIN TX AREA
  • Status: offline
Re: New FG1000D HA Deployment Using Nexus VPC Troubleshooting 2018/06/07 11:48:33 (permalink)
0
That should be doable; the /29 or /30 is not relevant. Since you mention NX-OS, are they acting like L3 routers? If so, per Cisco this will not work ideally for dynamic routing protocols.
 
Read this and review whether anything here is in your design:
 
http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
 
 

PCNSE, NSE, Forcepoint, StrongSwan Specialist
#3
claytonmeyer
New Member
  • Total Posts : 5
  • Scores: 0
  • Reward points: 0
  • Joined: 2018/06/06 12:58:32
  • Status: offline
Re: New FG1000D HA Deployment Using Nexus VPC Troubleshooting 2018/06/07 11:51:04 (permalink)
0
The only reason we're using a /29 rather than a /30 is that we need additional IP addresses to accommodate the HSRP VIP on the Nexus 9Ks. The 9Ks are our core switches, so they are doing layer 3. However, we are not running any dynamic routing protocols, so the dynamic routing peering over vPC does not apply in this situation.
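To illustrate the addressing on one of the /29s (the VRF name, VLAN, and IPs are placeholders): the FG VLAN interface takes .1, the two 9K SVIs take .2 and .3, and the HSRP VIP takes .4, which is why a /30 is too small. One 9K's side looks roughly like:
 
feature hsrp
 
interface Vlan100
  no shutdown
  vrf member TENANT-A
  ip address 192.0.2.2/29
  hsrp version 2
  hsrp 100
    priority 110
    ip 192.0.2.4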
#4
ede_pfau
Expert Member
  • Total Posts : 5751
  • Scores: 397
  • Reward points: 0
  • Joined: 2004/03/09 01:20:18
  • Location: Heidelberg, Germany
  • Status: offline
Re: New FG1000D HA Deployment Using Nexus VPC Troubleshooting 2018/06/07 13:17:37 (permalink) ☄ Helpful by claytonmeyer 2018/06/07 13:30:30
0
Cisco Nexus switches internally use non-standard Ethertypes, the same ones that FortiOS uses on the HA links. This is documented in the KB and in the HA chapter of the 'FortiOS Handbook'.
To avoid trouble you can change the Ethertypes in the HA setup ('config system ha'); 3 different types are used.
 
Your routing/connectivity problems could be a side effect of this.
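As a sketch, the change looks like this; the values below are only examples that differ from the defaults (0x8890, 0x8891, 0x8893), so check the Handbook and pick values unused on your network:
 
config system ha
    set ha-eth-type 8892
    set hc-eth-type 8894
    set l2ep-eth-type 8896
end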
post edited by ede_pfau - 2018/06/07 13:48:05

Ede

" Kernel panic: Aiee, killing interrupt handler!"
#5
claytonmeyer
New Member
  • Total Posts : 5
  • Scores: 0
  • Reward points: 0
  • Joined: 2018/06/06 12:58:32
  • Status: offline
Re: New FG1000D HA Deployment Using Nexus VPC Troubleshooting 2018/06/07 13:31:40 (permalink)
0
That is interesting; I hadn't heard that before. I'll look into it, but I can tell you that our heartbeat ports are directly connected to each other. The two FGs don't use any switching between them for HA. Do you think this could still apply?
#6
claytonmeyer
New Member
  • Total Posts : 5
  • Scores: 0
  • Reward points: 0
  • Joined: 2018/06/06 12:58:32
  • Status: offline
Re: New FG1000D HA Deployment Using Nexus VPC Troubleshooting 2018/06/08 08:27:00 (permalink)
0
I'd really like to hear from those of you who are running an HA cluster. Do you have to have an L2 device between the FGs & your core switching/routing? If so, can anyone explain why that is a requirement?
#7
vulcan603
New Member
  • Total Posts : 6
  • Scores: 0
  • Reward points: 0
  • Joined: 2018/03/28 04:22:17
  • Status: offline
Re: New FG1000D HA Deployment Using Nexus VPC Troubleshooting 2018/12/05 08:32:34 (permalink)
0
Did you ever resolve this?
 
I am having a similar issue since upgrading to a Nexus 9K over the weekend.
I can't get my FortiADC (load balancers) to establish an LACP connection to the 9K.
 
I have found the following, which appears to be a bug in the 9K regarding LACP, but I have not confirmed that it is why LACP isn't working (see the verification commands after the links below).
 
Started here after some googling.
https://www.reddit.com/r/networking/comments/7uu1zh/nxosv_9000_703i72_vpc_fully_working/
 
The LACP bug is documented here. It seems Cisco to Cisco is fine, but Cisco to non-Cisco does not work.
https://learningnetwork.cisco.com/thread/120028
 
This describes a workaround for a known LACP bug in 7.0.3.I7.2:
https://techstat.net/cisco-nexus-nx-osv-9000-lacp-vpc-bug-fix/
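For reference, the LACP state on the 9K side can be checked with the standard NX-OS show commands (read-only, no config changes):
 
show port-channel summary
show lacp neighbor
show lacp counters
show vpc consistency-parameters global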
 
Were you able to resolve your issue?
#8