Connectivity audit using weird packet sizes ...

Started by Patrick M. Hausen, November 23, 2023, 05:37:20 PM

IPv4:
Currently running OPNsense 23.7.9 at Thu Nov 23 17:29:33 CET 2023
Checking connectivity for host: pkg.opnsense.org -> 89.149.222.99
PING 89.149.222.99 (89.149.222.99): 1500 data bytes
1508 bytes from 89.149.222.99: icmp_seq=0 ttl=55 time=32.805 ms
1508 bytes from 89.149.222.99: icmp_seq=1 ttl=55 time=32.682 ms
1508 bytes from 89.149.222.99: icmp_seq=2 ttl=55 time=32.782 ms
1508 bytes from 89.149.222.99: icmp_seq=3 ttl=55 time=32.610 ms

--- 89.149.222.99 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss


OK, working - but why oversized packets which will have to be fragmented?
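
Spelling out the size math as I understand it (standard header lengths assumed, not taken from the audit script):
1500 bytes ICMP payload + 8 bytes ICMP header = the 1508 bytes ping reports
1508 + 20 bytes IPv4 header = 1528 bytes on the wire, which exceeds the 1500 byte interface MTU, so it leaves as two fragments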

And IPv6:
Checking connectivity for host: pkg.opnsense.org -> 2001:1af8:5300:a010:1::1
PING6(1548=40+8+1500 bytes) 2003:a:d59:3800:3eec:efff:fe00:5433 --> 2001:1af8:5300:a010:1::1

--- 2001:1af8:5300:a010:1::1 ping6 statistics ---
4 packets transmitted, 0 packets received, 100.0% packet loss


No fragmentation in IPv6, so of course:
17:26:53.546530 IP6 opnsense.ettlingen.hausen.com > 2003:a:d59:3800:3eec:efff:fe00:5433: ICMP6, packet too big, mtu 1492, length 1240



This is a 23.7.10 lab installation connected to my LAN as uplink ("block private networks" disabled) and with the regular interface MTU of 1500 bytes. The real Internet firewall has got an MTU of 1492 bytes on WAN (PPPoE).


What's the reasoning behind sending large packets instead of just regular ICMP echo? And is there possibly a bug in the size calculation?


Kind regards,
Patrick
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

November 23, 2023, 07:29:22 PM #1 Last Edit: November 23, 2023, 07:34:57 PM by Maurice
I once thought this was a bug, too, but Franco confirmed it to be intentional:
https://github.com/opnsense/core/commit/cdd35a

I'm still not convinced this is the best solution though:
For IPv4, some routers perform fragmentation, but many don't. So the test may fail without there actually being a problem.
For IPv6, the kernel by default fragments the packets before sending them. But I don't think it performs PMTUD for an echo request, so this might fail if the path MTU doesn't match the local interface MTU.
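
A quick way to tell a real path MTU problem from an audit artifact is to probe the path MTU directly. Something along these lines should do it (my own test commands, not what the audit runs; sizes assume a 1492 byte PPPoE path and the standard 20/40 byte IP plus 8 byte ICMP headers):

ping -D -s 1464 -c 4 pkg.opnsense.org    # 1464 + 20 + 8 = 1492, Don't Fragment bit set
ping6 -s 1444 -c 4 pkg.opnsense.org      # 1444 + 40 + 8 = 1492, IPv6 is never fragmented in transit

If these go through but the oversized audit ping fails, the path MTU itself is fine.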

What happens if you set the lab system's WAN MTU to 1492?

Cheers
Maurice

[edit] The kernel also fragments IPv4 packets before sending them, but if the fragments are still too big to fit the path MTU, an upstream router may fragment them again (which isn't possible with IPv6). [/edit]
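
You can watch the fragments on the wire with a capture filter on the IPv4 fragment fields (interface name is just an example):

tcpdump -ni pppoe0 'ip[6:2] & 0x3fff != 0'

This matches any IPv4 packet with the More Fragments bit set or a non-zero fragment offset.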
OPNsense virtual machine images
OPNsense aarch64 firmware repository

Commercial support & engineering available. PM for details (en / de).

Still, why send packets with a total size greater than the local interface MTU? That doesn't make sense.

In a download scenario you will get 1460 bytes of payload with 20 bytes of TCP and 20 bytes of IPv4 header. That will be reduced to 1452 by MSS clamping, which most people will probably enable if they are connected via PPPoE. But you will never get a packet over 1500 bytes including all headers ...
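
To spell the arithmetic out (standard header sizes assumed):
Ethernet MTU 1500 = 1460 MSS + 20 TCP + 20 IPv4
PPPoE MTU 1492 = 1452 clamped MSS + 20 TCP + 20 IPv4
Either way the largest packet a TCP download produces is exactly the MTU, never more.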

If I set the WAN MTU of the lab firewall to 1492, the IPv6 audit is successful.
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

Mine works okish:

Checking connectivity for host: pkg.opnsense.org -> 2001:1af8:5300:a010:1::1
PING6(1548=40+8+1500 bytes) 2003:cd:873a:4300:f690:eaff:fe00:d30d --> 2001:1af8:5300:a010:1::1
1508 bytes from 2001:1af8:5300:a010:1::1, icmp_seq=1 hlim=55 time=20.862 ms
1508 bytes from 2001:1af8:5300:a010:1::1, icmp_seq=2 hlim=55 time=21.034 ms
1508 bytes from 2001:1af8:5300:a010:1::1, icmp_seq=3 hlim=55 time=21.295 ms

--- 2001:1af8:5300:a010:1::1 ping6 statistics ---
4 packets transmitted, 3 packets received, 25.0% packet loss
round-trip min/avg/max/std-dev = 20.862/21.064/21.295/0.178 ms

But improvements welcome.

Background: I've seen the weirdest bug cases with the full payload not going through in VMware back in the day. It was almost impossible to diagnose except with these oversized pings. Clipping to the MTU can be done, but it requires more code for such a simple and mostly irrelevant piece of audit scripting.
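
Just to illustrate what clipping would mean, roughly (a sketch, not the actual audit code; interface name and header sizes are assumptions):

mtu=$(ifconfig pppoe0 | grep -o 'mtu [0-9]*' | cut -d ' ' -f 2)   # read the WAN MTU
ping  -c 4 -s $((mtu - 28)) pkg.opnsense.org    # 20 byte IPv4 header + 8 byte ICMP header
ping6 -c 4 -s $((mtu - 48)) pkg.opnsense.org    # 40 byte IPv6 header + 8 byte ICMPv6 header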


Cheers,
Franco

One could argue that a failing oversized ping is an indicator of fragmentation / MTU issues, so it might actually be helpful in rare cases. On the other hand, it causes a lot of false alerts. Patrick and I obviously stumbled over such a failed audit independently, wondering what the heck was going on.

Not sure how to proceed, but I'm still in favour of a more reasonable payload size.
OPNsense virtual machine images
OPNsense aarch64 firmware repository

Commercial support & engineering available. PM for details (en / de).

It would be nice if the packets were not oversized, with an option somewhere to oversize them for debugging purposes.

I use WireGuard to tunnel in IPv6 from a VPS. IPv6 itself works fine and checking packages over IPv6 works, but the connectivity check ping doesn't, presumably because the oversized packets don't fit the tunnel MTU.