Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.
mbrowndcm
New Contributor III

Multicast "bridging" and joining multiple groups

Hello, I am interested in bridging multicast traffic across a Fortigate. It appears to be either very simple, or very complicated. I have reviewed the technote on Multicast, and I am unsure if I need to worry about configuring PIM routing. If not, then I believe I would have to configure the join-group on the interface that is on the same VLAN as the multicast traffic/group. Here is a diagram for reference. Has anyone configured multicast "bridging?" Am I missing anything? Thanks, Matt
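For context, a receiver "joins" a group by asking its OS to emit an IGMP membership report, which snooping switches and multicast routers listen for. A minimal Python sketch of that host-side join (illustrative only; the group address comes from later in the thread, the function names are mine):

```python
import socket

def make_membership_request(group: str, iface: str = "0.0.0.0") -> bytes:
    # Pack the kernel's ip_mreq struct: 4-byte group address + 4-byte
    # local interface address ("0.0.0.0" lets the OS pick the interface).
    return socket.inet_aton(group) + socket.inet_aton(iface)

def join_group(port: int, group: str) -> socket.socket:
    """Bind a UDP socket and join `group`; the OS then sends the IGMP
    membership report (the JOIN) that the rest of this thread discusses."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock

# Example: join_group(10120, "239.100.100.101") then sock.recvfrom(2048)
```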
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
1 Solution
mbrowndcm
New Contributor III

[I guess I needed to take the weekend to sleep on this... I've corrected my questions as necessary.] Is there any way that I can configure the interface near the sender to dynamically register as a router for certain multicast groups only when receiving nodes JOIN the groups on the interface near the receivers? It doesn't seem that I can leverage functionality as an RP for this, correct? Like a JOIN or PRUNE pim packet? Is it standard to just allow multicast to always hit the interface? I'm concerned with maintaining the lowest possible latency between the two subnets, so it is mildly concerning that dynamic JOIN/PRUNE isn't possible. But it is also quite possible I'm missing the point. Any assistance is appreciated. Thanks, Matt

17 REPLIES
emnoc
Esteemed Contributor III

1st, cool drawings; a lot of others should follow your example when posting problems, a picture says a thousand words. 2nd, you could use mcast forwarding, but that would forward all mcast traffic regardless of whether you need it at each location, 192.168.100.x > 192.168.200.x and vice-versa. (Is that what you really want to do? Use extreme caution when doing it.) 3rd (imho the best): configuring "config router multicast" and defining PIM interfaces would be better, with traffic controls, imho. It would also protect you from needless forwarding of unwanted mcast traffic and such.

PCNSE 

NSE 

StrongSwan  

mbrowndcm
New Contributor III

Thanks for the reply. I pretty much came down to the following configuration after reading through the CLI guide. It's worth noting that the switches where the multicast group is "published/lives" and where the destination hosts live have the following features enabled: snooping plus a host membership querier, and drop traffic from "unknown groups" ("unknown groups" being multicast groups that the switch has not snooped an IGMP CREATE for and heard at least one IGMP JOIN for within the query response timeout threshold; they are both 3COM Baseline, aka HP V-series, for future reference). This switch config stops multicast packets from being pushed to the nodes that don't join the created, and known, multicast groups. Considering the above, the following fortigate config, specifically firewall multicast-policy and join-group, should actually control the traffic well (have the multicast traffic actually only publish to the nodes that have IGMP JOINed the group). PIM might only be useful if I have multiple switches/routers to send the packets through, in a similar manner to STP. At the root VDOM: 1) Configure the system to allow multicast forwarding:
 conf system settings
 set multicast-forward enable
 end
 
2) Configure the multicast-policy
 conf firewall multicast-policy
 edit 1
 set srcintf internal1
 set dstintf internal2
 end
 
3) Configure the firewall policy that allows the traffic. [Also considering conf global\tp-mc-skip-policy, which will skip firewall policies for multicast traffic.] 4) Configure the multicast group join on internal1:
conf vdom
 edit root
 conf router multicast 
 conf interface
 edit internal1
 conf join-group
 edit 239.100.100.101
 next
 edit 239.100.111.99
 next
 edit 239.100.111.100
 next
 end
 next
 end
 end
 end
It's unclear how to join-group to multiple groups. Thanks! Matt [edit] I'll obviously have to test this with the prod multicast infrastructure, but for now, iperf will do the trick!
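As an aside, plain UDP sockets are enough to generate test traffic toward a group. A hedged Python sketch (the group is taken from the join-group config above, the port from emnoc's later example; both are placeholders). Note that the default multicast TTL is 1, which a router such as the FortiGate will not forward, so it must be raised:

```python
import socket

def make_multicast_sender(ttl: int = 4) -> socket.socket:
    """UDP socket configured for multicast sends. The OS default multicast
    TTL is 1, so routed (not just bridged) tests need a higher value."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

# Usage (run from a host on the sender's VLAN):
#   sock = make_multicast_sender()
#   sock.sendto(b"hello group", ("239.100.100.101", 10120))
```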
emnoc
Esteemed Contributor III

I've always hated static joins within cisco routers when working the financial sector and working with 200+ groups at any given time, due to the extra load it places. $-6 groups on a firewall probably would not make a big impact, but it is a process that would run at interval X. If you enable igmp snooping and have a PIM/IGMP querier, modern switches typically know this and respect or trust that multicast router and the local active subscriptions. On the IGMP create and join issues: does your switch automatically forward traffic to a host within the same vlan and locally, without a querier? i.e. publisher 192.168.100.101>>>>224.0.1.120:10120, subscriber 192.168.100.23. One other issue with L3-mcast-interfaces, and why they are so much better: let's say you have joins on the FGT and bridging enabled, but no subscriber within the other L3 subnet; the forwarder process will always send that group, regardless of a true subscriber within the other network subnet. True multicast routing would expire the flow upon no active subscription.

mbrowndcm
New Contributor III

I've always hated static joins within cisco routers when working the financial sector and working with 200+ groups at any given time, due to the extra load it places.
I don't think I'm concerned with this level, nor do I have multiple hops to be concerned with. It's simply: hostA > data to multicast group > VLAN200 on switchA > Fortigate internal1 > Fortigate internal2 > VLAN100 on switchA > host B. I think the above will work, but I need to test. I did some sniffing with iperf and it seemed to be fine. I was trying to use soni's win32 port of PackETH, without any luck (it wasn't writing the IGMP JOIN membership query packets properly).
$-6 groups on a firewall probably would not make a big impact, but it is a process that would run at interval X.
I really don't understand what you mean by "$-6 groups"; in fact, I don't understand what you were trying to express in this sentence at all. Are you saying that "there will be latency introduced to the packet arrival time by processing these packets through firewall policies"? If so, I'm already prepared to troubleshoot the jitter/drift that may occur. Our middleware infrastructure can easily measure this (it's already built in to the apps, hooray!).
If you enable igmp snooping and have a PIM/IGMP querier, modern switches typically know this and respect or trust that multicast router and the local active subscriptions.
Are PIM and an IGMP querier the same thing? I don't think they are, but I could be wrong. Maybe I don't understand well, but are you saying that the switch is already acting as a multicast router (likely speaking PIM)?
On the IGMP create and join issues: does your switch automatically forward traffic to a host within the same vlan and locally, without a querier?
If I did not have the IGMP querier running, then IGMP snooping wouldn't work. Also, I do have the "drop packets from unknown" setting (as previously defined). If I had both of these things disabled, the multicast traffic would turn into broadcast traffic and hit every single node on the switch. The devs really didn't split the traffic up well enough (in my opinion), so the multicast groups are quite chatty to the hosts that are subscribed. But this has little effect.
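The switch behavior described here (deliver a group only to ports that have joined; drop traffic for unknown groups) can be modeled in a few lines. A toy sketch of the semantics, not vendor code:

```python
class SnoopingSwitch:
    """Toy IGMP-snooping model: a group is flooded only to the ports
    on which a JOIN was snooped; unknown groups go nowhere."""

    def __init__(self):
        self.groups = {}  # group address -> set of ports with an active JOIN

    def igmp_join(self, group: str, port: str) -> None:
        # Snooped an IGMP membership report on `port` for `group`.
        self.groups.setdefault(group, set()).add(port)

    def forward_ports(self, group: str) -> list:
        # "drop traffic from unknown groups": no JOIN seen -> empty list.
        return sorted(self.groups.get(group, set()))
```

With a querier running, entries would also age out when hosts stop answering queries; that refresh logic is omitted here for brevity.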
One other issue with L3-mcast-interfaces, and why they are so much better: let's say you have joins on the FGT and bridging enabled, but no subscriber within the other L3 subnet; the forwarder process will always send that group, regardless of a true subscriber within the other network subnet.
Yes, since the fortigate interface (internal1 in my case) is subscribed to the multicast group, it will receive packets pushed to that group. You are also saying that the Fortigate will then push the packets through to the other interface (internal2) and try to publish them to the multicast group there. But there won't be any group members, so my switch will drop the traffic thanks to the "drop packets from unknown" setting. I do see that this will cause undesirable traffic. [:'(]
True multicast routing would expire the flow upon no active subscription.
I am not clear on how to implement PIM routing, but would like to, since that last bit is not great, specifically since the firewall will have to inspect all these packets. The interface internal1 would still have to join the multicast group; are you saying that that's not correct? That the PIM router will dynamically join the group when it hears an IGMP JOIN request from a client on internal2? Thanks very much for your input, it's been invaluable! Thanks, Matt
emnoc
Esteemed Contributor III

I really don't understand what you mean by "$-6 groups"; in fact, I don't understand what you were trying to express in this sentence at all. Are you saying that "there will be latency introduced to the packet arrival time by processing these packets through firewall policies"? If so, I'm already prepared to troubleshoot the jitter/drift that may occur. Our middleware infrastructure can easily measure this (it's already built in to the apps, hooray!).
That should have been "5 to 6 groups"; they won't make that much impact on the cpu, but if you place a lot of joins, sooner or later that becomes yet another process that tacks on to CPU and memory consumption. Since FWs are just that, firewalls, and not primarily routers, be advised of the impact this could or could not cause; that is all I'm warning you about.
Are PIM and an IGMP querier the same thing? I don't think they are, but I could be wrong. Maybe I don't understand well, but are you saying that the switch is already acting as a multicast router (likely speaking PIM)?
Totally wrong here; IGMP is group membership, PIM is protocol independent multicast: 2 different functions, with different discovery and dst groups, 224.0.0.1 vs 224.0.0.13 for example. 13.0.0.224.in-addr.arpa domain name pointer pim-routers.mcast.net. 1.0.0.224.in-addr.arpa domain name pointer all-systems.mcast.net. They both deal with multicast but are 2 unique beasts and play 2 unique functions. In some of the bigger networks that I've worked, you can enable and disable IGMP and PIM independently of each other. With cisco routers it is automatic (you enable PIM, and IGMP query is also enabled). FWIW: before PIM we used DVMRP or mrouted-OSPF, but typically anything modern since 2000 is PIM aware. IGMP has three versions, v1 v2 v3; v3 is a unique version that provides both group and sender subscription matching. I would suggest you read up on IGMP and PIM from the RFC point of view. On the latter: igmp-snooping switches are not multicast routers all of the time, and, let me repeat, IGMP snooping deals with IGMP group subscriptions. As I type, I have a ton of L2 cisco switches, all IGMP-snooping aware and enabled, but they are not multicast routers by a long shot.
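The two well-known addresses above can be checked with Python's standard ipaddress module; a small sketch (the constants simply restate the IANA assignments mentioned in the post):

```python
import ipaddress

# Well-known local groups from the post (IANA assignments):
ALL_SYSTEMS = ipaddress.ip_address("224.0.0.1")       # IGMP general queries
ALL_PIM_ROUTERS = ipaddress.ip_address("224.0.0.13")  # PIM hellos / join-prunes

def is_multicast(addr: str) -> bool:
    """IPv4 multicast is the class D range, 224.0.0.0/4."""
    return ipaddress.ip_address(addr).is_multicast

def is_link_local_multicast(addr: str) -> bool:
    """224.0.0.0/24 is local-network control traffic and is never
    forwarded off the local link, which is why IGMP and PIM messages
    stay between directly attached hosts, switches, and routers."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network("224.0.0.0/24")
```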
Yes, since the fortigate interface (internal1 in my case) is subscribed to the multicast group, it will receive packets pushed to that group. You are also saying that the Fortigate will then push the packets through to the other interface (internal2) and try to publish them to the multicast group there. But there won't be any group members, so my switch will drop the traffic thanks to the "drop packets from unknown" setting. I do see that this will cause undesirable traffic.
Thanks for that clarification; that is what I was getting at.
The interface internal1 would still have to join the multicast group; are you saying that that's not correct? That the PIM router will dynamically join the group when it hears an IGMP JOIN request from a client on internal2?
Kinda correct. 1st off, PIM is not an IGMP querier process or function. 2nd, when you enable PIM on that interface, you get an IGMP querier by default and don't need the static joins that you are doing. So the L3 interface would query for any active subscriptions, and if any are heard and a valid group is present, then it would AUTOMATICALLY forward the traffic. Everything else that you are planning to do to control 5-6 groups could be eliminated by just enabling multicast routing between the 2 subnets and maybe one fwpolicy and a few fwpolicy-address entries. Since the subnets are local to the fwappliance, the RPF-checks will pass, and as long as a fwpolicy is present then ...bingo, you have senders and receivers aware of and receiving the groups. I really think you're creating a lot of unwarranted work for nothing and overthinking this, imho.
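The dynamic behavior described here (forward a group while membership reports keep arriving, stop once they age out) can be sketched as a toy state table. The 260-second default below is an assumption borrowed from IGMPv2's Group Membership Interval, not a FortiOS value:

```python
class GroupState:
    """Toy model of a multicast router's per-group forwarding state:
    forward only while a membership report has been heard recently."""

    def __init__(self, holdtime: float = 260.0):
        self.holdtime = holdtime
        self.last_report = {}  # group -> timestamp of last membership report

    def report(self, group: str, now: float) -> None:
        # An IGMP membership report (JOIN or query response) refreshes state.
        self.last_report[group] = now

    def should_forward(self, group: str, now: float) -> bool:
        # No report within the holdtime -> the flow expires and is pruned.
        last = self.last_report.get(group)
        return last is not None and (now - last) < self.holdtime
```

This is the contrast with a static join-group config, which keeps forwarding regardless of whether any receiver is still subscribed.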

mbrowndcm
New Contributor III

emnoc, thanks again for your response. I'm really lost as to how to configure what you're describing, specifically in my situation (when my client hosts don't speak sparse mode PIM). For instance, if I had multiple multicast routing hops, and I wished to only have traffic relay to other multicast routers, I would configure PIM-enabled sparse mode routers. However, I really do not understand how, with this single hop, I can configure the two interfaces to perform in this same way. Are you saying that I should create a rendezvous point for each of the interfaces and partner them with each other?
config router multicast
 set multicast-routing enable
 config interface
 edit port1
 set pim-mode sparse-mode
 set rp-candidate enable
 set rp-candidate-group multicast_port1
 set rp-candidate-priority 15
 end
 end
...where multicast_port1 contains the address list of each of the multicast groups I would like to route? The concept you've described sounds appealing (internal1 will only JOIN the multicast group when a host off internal2 JOINs the group), but from what I read in the multicast tech note, it does not seem easy or even possible (without an RP configured). Am I missing something simple? Thanks, Matt
emnoc
Esteemed Contributor III

You don't need a BSR RP or need to be a candidate in your ONE router topology. Here's what I would do:

 config router multicast
 set multicast-routing enable
 ! OPTIONAL: set a limit so your mcast routes don't harm your resources; adjust if required
 set route-limit 20
 end

 config system settings
 set multicast-forward enable
 end

 config firewall multicast-policy
 edit 10
 set srcaddr 192.168.200.101 255.255.255.255
 set srcintf vlan200
 set destaddr 239.100.200.99 255.255.255.0
 set dstintf vlan100
 next
 edit 11
 set srcaddr 192.168.200.101 255.255.255.255
 set srcintf vlan200
 set destaddr 239.100.200.100 255.255.255.0
 set dstintf vlan100
 next
 edit 12
 set srcaddr 192.168.200.101 255.255.255.255
 set srcintf vlan200
 set destaddr 239.100.200.101 255.255.255.0
 set dstintf vlan100
 next
 edit 13
 set srcaddr 192.168.200.101 255.255.255.255
 set srcintf vlan200
 set destaddr 239.100.111.99 255.255.255.0
 set dstintf vlan100
 next
 edit 14
 set srcaddr 192.168.200.101 255.255.255.255
 set srcintf vlan200
 set destaddr 239.100.111.100 255.255.255.0
 set dstintf vlan100
 next
 end
 ! you might be able to set a fwpolicy-address, list all of these within that address, and use one fwpolicy

 ! next do your interfaces for PIM
 config router multicast
 config interface
 edit vlan100-interface
 set dr-priority 1
 set hello-interval 65323
 set pim-mode sparse-mode
 ! optionally set the interface to NOT use pim hellos; this helps with resources
 set passive enable
 next
 edit vlan200-interface
 set dr-priority 1
 set hello-interval 65323
 set pim-mode sparse-mode
 ! optionally set the interface to NOT use pim hellos; this helps with resources
 set passive enable
 next
 end
 end

My personal notes and opinions:
* You can operate pim-sparse or dense if you so desire.
* Your clients (senders/receivers) do not speak pim-sparse at all; I just want you to be clear on that.
* I would set the DR priority to be low if you are NOT doing passive mode, in case someone else installs a router onto your mcast domain.
* The reason I would set pim passive mode or long pim-hello intervals is CPU/MEM resources.

Hope the above cfg helps.

mbrowndcm
New Contributor III

Thanks very much emnoc! I'll look into testing parts of your config as I work on implementation tonight. The last question I have is: if my switch is set up not to send traffic to any node that it doesn't receive an IGMP JOIN from, how is the interface going to see any of the multicast packets? Do I need to reconfigure the switch in some way? How do I set the join-group to multiple addresses?
emnoc
Esteemed Contributor III

The switch supports igmp-snooping, so it sees the mcast routers via their query interface. It now knows what is a multicast router and what is not, and on what port(s) and what vlan(s). So it forwards the client IGMP joins to the appropriate IGMP speaker (a router L3 interface, or in your case the firewall vlan100/200 L3 interfaces). Clear? Not sure what model switch(es) you have, but on cisco or even foundry you can get an idea of what port the switch sees a mcast router on via the following command:

 # show ip igmp snooping mrouter

I'm pretty sure other switch vendors have the same or a similar command (Extreme, Force10, AristaNetworks, HP, possibly Dell, etc.). Unrelated, but the newer generation of switches also supports what's called ip pim snooping, which is the same idea: they snoop in on PIM hellos and determine PIM speakers. This should not be an issue in your one-router topology and isn't required, but I thought I would point that out.
