I tried to enable jumbo frames on ax0 today. I set the MTU to 9000, which was accepted (10000 seems to be out of range, as ifconfig confirms).
However, when I tried "ping -s 8972 -D xxxxx", the pings never went through. The highest payload I could manage was 4054 bytes, which indicates a real MTU of 4082. I tried two different targets, which can ping each other at that size.
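Just to spell out the arithmetic behind those numbers (standard IPv4 and ICMP header sizes, nothing axgbe-specific):

```shell
# An ICMP echo carries: payload + 8-byte ICMP header + 20-byte IPv4 header.
# So a ping payload of 8972 exactly fills a 9000-byte MTU:
echo $((8972 + 8 + 20))   # 9000
# And my observed maximum payload of 4054 implies an effective MTU of:
echo $((4054 + 8 + 20))   # 4082
```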
Is this a hardware limitation or a kernel/driver bug? If it is a hardware limitation, why does ifconfig not complain when such a big MTU is applied (i.e. why does the axgbe driver not bork)?
The largest jumbo frame is 9216 bytes. Now, I am not sure if you configured the same on both ends. I have no issues doing jumbo.
As I wrote, I tried two counterparts that I confirmed work with 9K between themselves, so I can rule those out.
Also, all devices are on the same switch, so I rule out the switch as well.
If you don't have problems with 9K, there are only two things that could be at fault: the DAC cable connecting my OpnSense to the switch, or my OpnSense itself.
As a matter of fact, that specific DAC cable is a singleton - I have other DAC cables and 10GbE transceivers for all the other devices.
So, I swapped DAC cables and guess what? No change. BTW: By doing this I also swapped ports on the switch, so it cannot be a defective switch port either.
I can still set a 9K MTU, but everything beyond 4K gets discarded. When I ping from my OpnSense, I can even see that OpnSense emits the packets and that the counterpart replies to them (using tcpdump on the counterpart). When pinging from the counterpart, I see outgoing packets but no answers; once the size gets too big, there is nothing to be seen on OpnSense. Thus this seems to be a problem on OpnSense's receiving end.
4082 bytes is rather close to 4096, which may be one physical memory page, but I am only theorizing here.
Maybe different settings, like RSS or hardware offloading? Are you really sure your 9K MTU works?
How is this interface with jumbo configured? I mean, is it a trunk or a routed interface?
If it's a routed interface, are you testing the ping point-to-point?
We are talking about the LAN interface, which is connected to a switch and talking to another device on that same switch.
It is a trunk in that there are three VLAN sub-interfaces, or what are you referring to?
BTW: I have an indication that it is a driver limitation on this specific implementation, as sysctl shows:
dev.ax.0.iflib.rxq2.rxq_fl0.buf_size: 4096
dev.ax.0.iflib.rxq1.rxq_fl0.buf_size: 4096
dev.ax.0.iflib.rxq0.rxq_fl0.buf_size: 4096
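Interestingly, if the 14-byte Ethernet header lands in that same 4096-byte receive buffer (an assumption on my part, not something I verified in the driver source), the numbers line up exactly with what I measured:

```shell
# 4096-byte RX buffer minus the 14-byte Ethernet header leaves room for the IP packet:
echo $((4096 - 14))            # 4082: the effective MTU I observed
# Subtracting the 20-byte IPv4 and 8-byte ICMP headers gives the max ping payload:
echo $((4096 - 14 - 20 - 8))   # 4054: exactly my observed limit
```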
I was always wondering why there are only 3 RX queues. When you google axgbe, you will see this often:
ax0: <AMD 10 Gigabit Ethernet Driver> mem 0xef7e0000-0xef7fffff,0xef7c0000-0xef7dffff,0xef80e000-0xef80ffff irq 40 at device 0.4 on pci6
ax0: Using 512 TX descriptors and 512 RX descriptors
ax0: Using 4 RX queues 4 TX queues
ax0: Using MSI-X interrupts with 8 vectors
ax0: xgbe_phy_reset: no phydev
On the DEC750, there are only 3 RX queues and I have found no way of changing that. The buffer sizes above are read-only as well.
Do you have a DEC750 or something else? I tried disabling hardware offloading to no avail, but not disabling RSS yet. Do you have it enabled?
So that may be it... I have an A20 netboard and it looks like you have an A10, which would explain why your max MTU is 4K as opposed to 8K. That still counts as jumbo, since anything over 1500 is considered a jumbo frame.
What is your output when you issue 'sysctl -a | fgrep dev.ax.0.iflib' or 'netstat -m'?
Here's the datasheet for the I210 NIC in the A20, which states a 9.5K size.
https://www.mouser.com/datasheet/2/612/i210_ethernet_controller_datasheet-257785.pdf
On pg. 12, Table 1-3: "Size of jumbo frames supported: 9.5 KB"
I'll post it later as I am out.
I was talking about the axgbe driver via SFP+, not the 1 GbE igb. As I said in my opening post:
Quote from: meyergru on July 16, 2022, 12:22:49 PM
I tried to enable jumbo frames on ax0 today.