Hi,
I'm running OPNsense 25.1.5_5-amd64 on a typical PPPoE-over-VLAN internet connection that supports mini-jumbo frames. To enable this I set an MTU of 1508 on the PPPoE connection (the calculated MTU shows 1500).
This seems to work, pinging works:
$ ping 1.1.1.1 -c 10 -M do -s 1472
PING 1.1.1.1 (1.1.1.1) 1472(1500) bytes of data.
1480 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=18.9 ms
1480 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=23.8 ms
However, when checking BGP.Tools or SpeedGuide.net I get a TCP MSS of 1452 for IPv4. How can I fix this?
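For reference, the IPv4 TCP MSS is simply the MTU minus 20 bytes of IP header and 20 bytes of TCP header. The reported 1452 is exactly what a 1492-byte MTU (the classic PPPoE default) yields, while a working 1500-byte MTU should advertise 1460:

```shell
# IPv4 TCP MSS = MTU - 20 (IP header) - 20 (TCP header).
# 1452 matches the default PPPoE MTU of 1492;
# a full 1500-byte MTU should yield 1460.
echo "MSS at MTU 1492: $((1492 - 40))"   # 1452
echo "MSS at MTU 1500: $((1500 - 40))"   # 1460
```

So a reported MSS of 1452 suggests that somewhere on the path (or in a clamping rule) an MTU of 1492 is still in effect.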
I used this guide here: https://forum.opnsense.org/index.php?topic=21207.0
That worked for me.
Take a look at this thread (https://forum.opnsense.org/index.php?topic=45658.0) on how to correctly set the WAN MTU. It is a little more complicated than just setting the PPPoE interface MTU to 1508.
That being said, your ping was run from a Linux client (guessing from the ping syntax); have you tried from OPNsense itself? Maybe the ping only works because OPNsense fragments the packet via MSS clamping, but I am just guessing here.
Thank you for your replies.
ping also works on the router itself:
root@router:~ # ping -D -s1472 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 1472 data bytes
1480 bytes from 1.1.1.1: icmp_seq=0 ttl=60 time=8.115 ms
1480 bytes from 1.1.1.1: icmp_seq=1 ttl=60 time=8.364 ms
Also, the MTUs are correct in ifconfig (igc0=1512, igc0_vlan6=1508, PPPoE=1500). So this seems to be only an MSS clamping issue.
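Those three values line up exactly with the per-layer encapsulation overhead (4 bytes for the 802.1Q VLAN tag, 8 bytes for the PPPoE/PPP headers):

```shell
# Each encapsulation layer subtracts its header overhead from the parent MTU.
parent=1512              # igc0 (physical NIC)
vlan=$((parent - 4))     # 802.1Q tag        -> igc0_vlan6 = 1508
pppoe=$((vlan - 8))      # PPPoE (6) + PPP (2) -> pppoe0   = 1500
echo "igc0=$parent igc0_vlan6=$vlan pppoe0=$pppoe"
```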
Have you tried pinging www.speedguide.net with a packet size of 1472? As strange as it may seem, your MTU setup may be correct, but if your ISP's peering to those specific sites has a limited path MTU, you will be limited by that (i.e. bad peering).
As it seems, 1.1.1.1 has better peering for you, but that is no surprise: they have direct peering with many ISPs, which you can see when you do a traceroute. For me, it looks like this:
# traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 64 hops max, 40 byte packets
1 ac7.muc1.m-online.net (82.135.16.18) 1.593 ms 23.394 ms 26.484 ms
2 ae5.r4.muc6.m-online.net (82.135.16.128) 1.555 ms 1.488 ms 1.465 ms
3 host-62-245-213-3.customer.m-online.net (62.245.213.3) 2.188 ms 2.039 ms 1.518 ms
4 one.one.one.one (1.1.1.1) 1.673 ms 1.914 ms 1.892 ms
As you can see, there is direct peering for 1.1.1.1, whereas www.speedguide.net takes a lot more hops:
# traceroute www.speedguide.net
traceroute to www.speedguide.net (68.67.73.20), 64 hops max, 40 byte packets
1 ac7.muc1.m-online.net (82.135.16.18) 1.771 ms 1.521 ms 1.473 ms
2 ae1.rt-inxs-7.m-online.net (82.135.16.197) 1.511 ms 1.709 ms 1.587 ms
3 ipv4.decix-munich.core1.muc1.he.net (185.1.208.30) 2.623 ms 2.937 ms 2.897 ms
4 as6939.frankfurt.megaport.com (62.69.146.18) 8.324 ms 8.670 ms 8.581 ms
5 ipv4.decix-frankfurt.core1.fra1.he.net (80.81.192.172) 20.435 ms 55.434 ms 23.862 ms
6 * * *
7 port-channel6.core1.par3.he.net (184.104.196.231) 19.517 ms 19.683 ms 19.095 ms
8 port-channel4.core2.orf2.he.net (184.104.188.213) 90.550 ms 90.821 ms 90.874 ms
9 * * *
10 port-channel1.core2.jax1.he.net (184.104.198.70) 125.078 ms 113.244 ms 110.615 ms
11 gorack421-lc.10gigabitethernet3-5.core1.jax1.he.net (216.66.64.146) 112.199 ms 109.677 ms 112.081 ms
12 te-4-1-1132-40g-west.core-b.jcvnflcq.jax.as19844.net (216.238.150.205) 128.878 ms 118.123 ms
ge-0-0-6-1125.rr-a.jcvlfljb.jax.as19844.net (216.238.150.217) 118.933 ms
13 xe-0-1-2-1131.scolo-c10.jcvlfljb.jax.as19844.net (216.238.150.131) 112.475 ms 114.676 ms 117.294 ms
14 speedguide.net (68.67.73.20) 109.287 ms !Z 111.445 ms !Z 110.962 ms !Z
You can find out where the packet size is limited by pinging every step on your specific route.
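A rough sketch of such a hop-by-hop probe (assuming a Linux client with iputils ping, where `-M do` sets the DF bit; on FreeBSD/OPNsense use `-D` instead):

```shell
#!/bin/sh
# Ping every hop on the route with a full-size, non-fragmentable packet.
# 1472 bytes of ICMP payload + 28 bytes of ICMP/IP headers = 1500 on the wire.
TARGET=${1:-www.speedguide.net}
for hop in $(traceroute -n "$TARGET" 2>/dev/null | awk 'NR>1 {print $2}' | grep -v '^\*'); do
    if ping -c 1 -W 1 -M do -s 1472 "$hop" >/dev/null 2>&1; then
        echo "$hop: OK at 1500"
    else
        echo "$hop: fails at 1500 (or ICMP filtered)"
    fi
done
```

Note that a hop reported as failing may simply be filtering ICMP echo requests, so only a consistent cut-off point from one hop onwards is meaningful.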
If that is the problem, you also know that PMTUD works for you... ;-)
I can ping www.speedguide.net with 1500 bytes too:
% ping -D -s1472 www.speedguide.net
PING www.speedguide.net (68.67.73.20): 1472 data bytes
1480 bytes from 68.67.73.20: icmp_seq=0 ttl=51 time=107.106 ms
1480 bytes from 68.67.73.20: icmp_seq=1 ttl=51 time=107.499 ms
1480 bytes from 68.67.73.20: icmp_seq=2 ttl=51 time=107.253 ms
1480 bytes from 68.67.73.20: icmp_seq=3 ttl=51 time=107.499 ms
1480 bytes from 68.67.73.20: icmp_seq=4 ttl=51 time=107.270 ms
1480 bytes from 68.67.73.20: icmp_seq=5 ttl=51 time=107.550 ms
For some reason OPNsense is clamping the TCP MSS even though there's no need...
Well, for me, it does not. When you use https://www.speedguide.net, you are obviously using a client, so what you are measuring is the MTU between that client and the site.
Apart from potentially limiting the maximum MTU, all of the TCP handling, including finding the PMTU and setting window sizes, is handled by the client, not OPNsense. Thus, if you have ever fiddled with your client's settings, that may be the reason. There are lots of TCP "optimizing" guides and tools out there for Windows that modify the TCP stack's behaviour, including instructions on speedguide.net itself. More often than not, such instructions are outdated and no longer apply to modern Windows releases.
As for MSS clamping: of course, if you have enabled it under Firewall: Settings: Normalization, it will kick in.
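If you want to see whether such a clamping rule is actually loaded, you can check the generated pf ruleset from a shell on the firewall (a sketch; `/tmp/rules.debug` is where OPNsense writes its generated ruleset, as far as I know):

```shell
# A "scrub ... max-mss" line means MSS clamping is active on that interface.
grep -i 'max-mss' /tmp/rules.debug
# Or query the loaded ruleset directly:
pfctl -sa 2>/dev/null | grep -i 'max-mss'
```

If a `max-mss` value of 1452 shows up there, that would explain the reported MSS regardless of the PPPoE MTU.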