Can't Ping or RDP Azure VM using site to site connection

Author
Olakunzo
New Member
  • Total Posts : 1
  • Scores: 0
  • Reward points: 0
  • Joined: 2019/06/25 12:06:12
  • Status: offline
2019/06/25 12:10:26 (permalink)
0


I have a free Azure subscription in which I created a virtual network and connected it to a virtual machine. I followed the procedure for creating a site-to-site connection in the Azure portal, and once it completed I could see that the tunnel was connected, with data flowing in and out of both the Azure VPN gateway and my local VPN device. However, when I try to ping either of my Windows Server 2019 or 2016 machines, whether from the Azure VM to the on-premises server or the other way around, I get a "Request timed out" message. My local VPN device is a FortiGate 101E running v5.6.9. Is there something I am missing?
#1
Grave_Rose
Bronze Member
  • Total Posts : 25
  • Scores: 4
  • Reward points: 0
  • Joined: 2017/08/11 10:54:59
  • Status: offline
Re: Can't Ping or RDP Azure VM using site to site connection 2019/06/28 06:58:47 (permalink)
0
Hey Olakunzo,
 
I've done a little work in Azure with FortiGates and VPNs. The Azure networking stack, in my opinion, is sometimes a mess to understand. :) That said, in your situation I would verify what's happening at the packet level. Here are the assumptions for your network:
- On prem server: 10.20.30.40/24
- FortiGate internal IP: 10.20.30.1/24 (lan interface)
- FortiGate VPN interface name: vpn-to-azure
- Azure 2019 server: 192.168.1.1/24
  1. Start on your local on-prem server. Whether you're using tcpdump or Wireshark, both accept BPF filters, so capture with this filter: '(host 10.20.30.40 and host 192.168.1.1 and icmp) or arp'
  2. Make sure that the packet is leaving 10.20.30.40 with a destination MAC of your FortiGate's 'lan' interface.
  3. Move to the FortiGate and make sure the packet arrives with: diagnose sniffer packet lan 'host 10.20.30.40 and host 192.168.1.1 and icmp' 6 10
  4. Make sure it leaves the VPN interface with: diagnose sniffer packet vpn-to-azure 'host 10.20.30.40 and host 192.168.1.1 and icmp' 6 10
  5. If nothing shows up from step 4 then the firewall is either dropping the packet or routing it out the wrong interface. Check your logs and/or run debugs (let me know if you want debug commands). Stop here or move to step 6 if it does show up.
  6. If the packet shows up from step 4, log into your 2019 server in Azure and make sure you have Wireshark installed. Capture with the same filter as step 1.
  7. If nothing shows up on your server, investigate the Azure side (for example, the NSG rules applied to the VM's subnet or NIC). Stop here, or move to step 8 if it does show up.
  8. The packet (ICMP echo-request) has now arrived at its destination. Run Wireshark again with the same filter from step 1 and look for the "ICMP echo-reply" packet.
  9. If there is no ICMP echo-reply packet, your 2019 server may be dropping it (firewall?) or routing it out a different gateway or interface. Stop here or continue to step 10 if there is a reply.
  10. Run your packet captures in reverse order (vpn-to-azure, lan, on-prem server) looking for the ICMP echo-reply packet.
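One note on step 9, since it bites a lot of people: Windows Server's built-in firewall blocks inbound ICMPv4 echo-requests by default, so pings can time out even when the tunnel and routing are perfectly fine. A quick sketch of checking and (if you want) allowing it from an elevated prompt on each Windows box; the rule name here is just an example:

```
:: List existing firewall rules and look for anything ICMP-related
netsh advfirewall firewall show rule name=all | findstr /i icmp

:: Allow inbound ICMPv4 echo-request (type 8)
netsh advfirewall firewall add rule name="Allow ICMPv4 Echo In" protocol=icmpv4:8,any dir=in action=allow
```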
Wherever you find a device where the packet doesn't arrive, the previous hop is the issue. If you find a device where the packet doesn't leave, or doesn't leave correctly, stop and investigate that device. By "correctly" I mean the packet could be malformed, have the wrong ICMP code, be NAT'd, or be going out the wrong interface, for example. If you follow the packet, you'll find where you need to start looking.
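If you'd rather script the basic reachability check while you work through the captures, here's a minimal sketch in Python. It tests TCP instead of ICMP, which helps separate "host completely unreachable" from "ICMP filtered but TCP fine" (the addresses in the comments are the example ones from above, and 3389 is the standard RDP port; nothing here is specific to your setup):

```python
import socket


def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable networks.
        return False


# Example usage (hypothetical addresses from this thread):
#   check_tcp("10.20.30.40", 3389)   # on-prem server, RDP
#   check_tcp("192.168.1.1", 3389)   # Azure VM, RDP
```

If the RDP port answers while ping doesn't, you're almost certainly looking at an ICMP filter rather than a routing problem.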
 
Hope this helps,
 
Sean (Gr@ve_Rose)
#2