Hi,
I am trying to set up OPNsense on an X10SLH-N6-ST031 board, which has three Intel X540 chips providing six 10GBase-T ports (behind a PCI-E switch).
Unfortunately, I was only able to get about 1.5-2 Gbps with iperf3 between a LAN device and the OPNsense box itself.
I checked with Linux and was getting >9Gbps so the hardware should be fine.
Are there known caveats with these Intel chips or this board? I tried enabling some offloading options, but it didn't help.
Thanks for any help!
I checked with pfSense and got 5.5 Gbps (iperf3 -s on pfSense; with -P2 it reaches >9 Gbps) and >9 Gbps in the reverse direction.
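For anyone trying to reproduce these numbers, the test setup looks roughly like this (the 192.168.1.1 address is a placeholder for the firewall's LAN IP; this is a sketch, not the exact commands from the original test):

```shell
# On the firewall (the pfSense/OPNsense box): run the iperf3 server.
iperf3 -s

# On the LAN host: single TCP stream toward the firewall.
iperf3 -c 192.168.1.1          # placeholder address; use your LAN IP

# Two parallel streams (-P 2), which is where >9 Gbps was reached:
iperf3 -c 192.168.1.1 -P 2

# Reverse direction (-R): the firewall transmits, the LAN host receives.
iperf3 -c 192.168.1.1 -R
```

These are live-network command fragments, so they only make sense with two hosts on the same segment.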
I tried FreeBSD 13.0 (live CD) and got >9 Gbps using iperf3 in both directions.
What are the key differences between OPNsense (HardenedBSD) and FreeBSD that affect networking?
One thing I noticed is that on FreeBSD the NIC shows as "Intel(R) PRO/10GbE PCI-Express Network Driver" but as "Intel(R) X540-AT2" on OPNsense (dev.ix.1.%desc). Does the driver differ from upstream?
I tried comparing sysctl -A tunables, but nothing caught my eye, and I don't really know what to look for either. I also lack knowledge of FreeBSD internals, so I don't know what to profile with pmc.
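One concrete way to do that comparison (a sketch; the file names are arbitrary, and you'd run the capture step once on each system, then copy both files to one machine):

```shell
# Capture and sort all tunables on each box (run this on FreeBSD and on
# OPNsense, saving to different file names; the paths below are examples).
sysctl -a 2>/dev/null | sort > /tmp/sysctl-capture.txt

# Then diff the two captures, keeping only driver/network-related keys
# to cut the noise. grep exits non-zero when nothing matches, hence
# the trailing "|| true".
diff -u /tmp/sysctl-freebsd.txt /tmp/sysctl-opnsense.txt 2>/dev/null \
  | grep -E '^[+-](dev\.ix|hw\.ix|net\.)' || true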
Quote
What are the key differences between OPNsense (HardenedBSD) and FreeBSD that affect networking?
I don't have the answers, but this specific question no longer matters because, as of OPNsense 22.1, FreeBSD 13 is the base and HardenedBSD is no longer used. The FreeBSD 13-based kernel that OPNsense 22.1 ships and the stock FreeBSD 13 kernel still seem to perform differently, though. Check out the recent thread below: https://forum.opnsense.org/index.php?topic=27828.msg135311#msg135311
Hopefully, @Franco can reconcile those differences for us next week.
Hmm, I didn't see any packet loss, though.
Can't say whether it's custom OPNsense kernel/driver patches, a FreeBSD tunable, or a kernel tunable...
FWIW, I am no longer experiencing the packet loss described in that other post; it's just the performance delta that remains unmitigated and unexplained. Why the discard/CRC issues aren't problematic at the moment is unknown; even restoring everything back to the original configuration hasn't reproduced that symptom. It's possible it's a failure mode the system can get into when it falls far enough behind in performance and flow control isn't present to forcibly restore order. That would be a pretty low-level driver/NIC issue, but as shown in the other post, we are pretty far down the rabbit hole.
Thanks for the recap.
Maybe it's possible to bisect the differences between FreeBSD and OPNsense and find the change that's responsible.
Indeed, I'm attempting at least part of that atm by running the OPNsense kernel on a FreeBSD 13.0 VM and comparing the various sysctl settings to see if I can isolate the issue further. I'm tracking my results over in the other thread, so check there for updates.
Quote from: joellinn on April 10, 2022, 01:54:18 PM
I tried FreeBSD 13.0 (live CD) and got >9 Gbps using iperf3 in both directions.
What are the key differences between OPNsense (HardenedBSD) and FreeBSD that affect networking?
One thing I noticed is that on FreeBSD the NIC shows as "Intel(R) PRO/10GbE PCI-Express Network Driver" but as "Intel(R) X540-AT2" on OPNsense (dev.ix.1.%desc). Does the driver differ from upstream?
I tried comparing sysctl -A tunables, but nothing caught my eye, and I don't really know what to look for either. I also lack knowledge of FreeBSD internals, so I don't know what to profile with pmc.
What happens if you disable hardware offloading when booted into the FreeBSD 13.0 live CD (e.g. ifconfig <interface> -rxcsum -txcsum -tso -lro -txcsum6 -vlanhwtag -vlanhwtso)? Do you see line speed or a drop?
If you see a drop, compare with the offload options OPNsense reports it is using.
Also, watch the kernel threads with top -SHiz and see whether you are pegging a single core on an intr kthread when you get poor results versus when you get line speed.
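The steps above can be sketched as a rough workflow (the interface name ix0 is an assumption; substitute whatever name your NIC shows up under):

```shell
# Show which offload options are currently enabled on the NIC
# (look at the options=... line in the output):
ifconfig ix0

# Disable the hardware offloads, then re-run the iperf3 test:
ifconfig ix0 -rxcsum -txcsum -tso -lro -txcsum6 -vlanhwtag -vlanhwtso

# While the iperf3 test is running, watch per-thread CPU usage.
# A single intr kernel thread pinned near 100% on one core suggests
# an interrupt-handling bottleneck rather than a link problem.
top -SHiz
```

Note that top here is interactive; these are diagnostic fragments to run by hand on the live system, not a script.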
Sorry to comment on an old thread, but I was wondering if this ever got resolved?
I've seen this board on eBay and am considering getting one...
Thanks.