Messages - AWBbox

#1
Netmap appears to be handling jumbo frames just fine on a vanilla install of FreeBSD 14.2. I performed the following:

- Perform a vanilla install of FreeBSD 14.2
- Enable jumbo frames by editing /etc/rc.conf
ifconfig_vmx0="inet 192.168.18.50 netmask 255.255.255.0 mtu 9000"
/etc/rc.d/netif restart
- Increase Netmap buffer size:
sysctl dev.netmap.buf_size=4096
- Install netmap and pkt-gen:
pkg install netmap pkt-gen
- Run a receive capture on the specified interface:
pkt-gen -i vmx0 -f rx
I am able to capture maximum-size jumbo frame ICMP packets without fragmentation. What I'm struggling to understand is why Zenarmor cannot support jumbo frames when they advised this was due to a Netmap limitation, yet the underlying Netmap implementation appears to handle them just fine.
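
For reference, a maximum-size unfragmented ICMP test between two jumbo-enabled hosts looks something like the following (the peer address here is just an example):

# 8972 bytes of ICMP payload + 8-byte ICMP header + 20-byte IP header = 9000 bytes, matching the MTU
# -D sets the Don't Fragment bit in FreeBSD's ping
ping -D -s 8972 192.168.18.1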
#2
Zenarmor (Sensei) / Netmap question, jumbo frame support
February 09, 2025, 02:26:21 PM
I tried installing Zenarmor on OPNsense 25.1 only to find that it would not work due to lack of support for jumbo frames.

I understand that this is specifically a limitation with Netmap, and I am trying to determine exactly what version of Netmap Zenarmor is using in order to test this further with a vanilla installation of FreeBSD 14.2.

Zenarmor's documentation for Linux points to an active GitHub project, which states that FreeBSD has included netmap kernel support by default since version 11, and which provides no means to build the package from source. The netmap package in FreeBSD appears to be something else altogether.

I am pretty sure I'm getting my wires crossed here and would appreciate some clarification. What version of Netmap is Zenarmor using?
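
For anyone following along, the in-kernel netmap support can be confirmed on a stock FreeBSD install with something like this:

# The /dev/netmap device node and the dev.netmap sysctl tree both come from the netmap code built into the kernel
ls -l /dev/netmap
sysctl dev.netmap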
#3
I wanted to provide an update to this in case anyone else looks for this solution in future. A kind soul on Reddit provided the answer in the form of jumbo frames!

I increased the MTU to 9000 across my topology, with the results shown in the attached screenshot. This traffic is being routed and firewalled between the two subnets shown, which makes me quite happy :)
#4
Hi everyone,

I'm fairly new to OPNsense and Zenarmor. When setting up Zenarmor for the first time and selecting interfaces in Routed Mode with the native netmap driver, I receive the attached warning message. I've read the documentation from the links it provides, but I still don't understand what exactly makes my deployment incompatible.

Some details about my setup:

- OPNsense is version 23.1.11 and Zenarmor version is 1.13.2
- The interfaces are VLAN subinterfaces on a lagg interface which also has receive side scaling enabled
- Hardware CRC, TSO, LRO and VLAN filtering are all disabled
- hw.ixl.enable_head_writeback is disabled
- I am running an Intel XL710-AM1 which uses the ixl driver in FreeBSD. I am not using the driver that comes with OPNsense, but the latest one from Intel instead:


[admin@lonrtr01 ~]$ sysctl -a | grep -E 'dev.(ix).*.%desc:'
dev.ixl.3.%desc: Intel(R) Ethernet Connection 700 Series PF Driver, Version - 1.12.40
dev.ixl.2.%desc: Intel(R) Ethernet Connection 700 Series PF Driver, Version - 1.12.40
dev.ixl.1.%desc: Intel(R) Ethernet Connection 700 Series PF Driver, Version - 1.12.40
dev.ixl.0.%desc: Intel(R) Ethernet Connection 700 Series PF Driver, Version - 1.12.40
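
For reference, the offload and driver settings above map onto tunables and interface flags along these lines (illustrative only, not a copy of my exact configuration):

# /boot/loader.conf.local - head writeback disabled for the ixl driver
hw.ixl.enable_head_writeback="0"

# Offloads disabled on the parent interface (normally set via Interfaces > Settings in the GUI)
ifconfig ixl0 -rxcsum -txcsum -tso -lro -vlanhwfilter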


I had thought that Zenarmor now supported the ixl driver natively and that I wouldn't need to install a different kernel on OPNsense, for example. Am I mistaken, or is it something else I've done?

I would appreciate it if anyone could help improve my understanding of the situation. Thanks!
#5
Thanks Sy, it's reassuring to know that I'm being limited by Zenarmor running on a single core.

I'm running an Intel i9-12900 which is hardly being taxed at the moment. I imagine multi-core support will go a long way toward leveraging more of its processing power and getting back to the higher throughput I was enjoying before.
#6
Hi everyone,

I'm new to OPNsense and wanted to try implementing some Layer 7 inspection features in the form of Zenarmor. I'm just having a play with the free version for now before committing any money to unlock more features.

One thing I've noticed is that it significantly reduces throughput, in my case from over 16Gbps down to just 5Gbps. I have four VLANs on the inside of my network and Zenarmor is enabled on all of them.

My question is: is it possible to apply Zenarmor processing to north-south traffic (i.e. traffic from each subnet to the internet) but exempt east-west traffic (i.e. traffic between the internal subnets themselves)?

I noted that this part of the Zenarmor guide refers to a section for exempting subnets and VLANs, but that appears to exempt all traffic from those networks, so it would not be suitable. That section of the GUI doesn't even appear for me anyway; maybe it's a paid feature? https://www.zenarmor.com/docs/opnsense/configuring/general#exempting-vlans--networks

If anyone has suggestions I would be keen to hear them, thanks!
#7
I'm just extra paranoid about exposing such services to the internet!
#8
Thanks Maurice, that would make a lot of sense. The loopback idea is a good one as a workaround too, thanks.
#9
Hi everyone,

I'm experiencing a weird problem with Wireguard (os-wireguard-go plugin v1.13_5) on OPNsense 23.1.11.

I have a Wireguard endpoint client tunnelling all traffic through to my OPNsense appliance. The firewall rule for the Wireguard interface is wide open, permitting all traffic.

I want to be able to access the administrative interface of OPNsense via HTTPS and SSH on the Wireguard interface IP and so I have included the interface as a listener via System > Settings > Administration > Web GUI + Secure Shell.

If I reboot the OPNsense appliance then I can no longer access administrative interfaces via the Wireguard interface IP. However, if I remove the Wireguard interface as a listener via System > Settings > Administration > Web GUI + Secure Shell > Save > Apply, and then re-add the interface in the same way, it starts working again!

This feels like a bug and I want to make sure I'm not going crazy. If anyone else using Wireguard could test this and see if they are able to replicate, or can point out what I'm doing wrong, that would be great. Thanks!
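
For anyone willing to test this, the listeners can be checked from the console after a reboot with something like the following (assuming the default ports 443 and 22):

# List listening sockets and filter for the web GUI and SSH ports
sockstat -4 -6 -l | grep -E ':(443|22)'

If neither the Wireguard interface IP nor a wildcard entry shows up for those ports, the services never bound to that interface after boot.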
#10
Thank you for your response; however, Receive Side Scaling is already enabled via net.inet.rss.enabled="1". I have tested and verified this on my end: prior to enabling it, OPNsense was only sending data on one of the two links, resulting in transfer speeds of only 9.5Gbps.
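
For what it's worth, a quick way to confirm that RSS is active and that flows are actually being spread across the NIC queues (illustrative commands, not my exact test procedure):

# Confirm RSS is enabled and see how many hash bits are configured
sysctl net.inet.rss.enabled net.inet.rss.bits

# Check that interrupts are landing on multiple ixl queues rather than a single one
vmstat -i | grep ixl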
#11
Hi everyone,

I've been a pfSense user for many years and fancied giving OPNsense a try for comparison. I'm having some throughput issues with LACP. As the attached diagram shows, I have a 4x10Gbps LACP LAG between my ESXi hypervisor and the switch, and a 2x10Gbps LACP LAG between the switch and the OPNsense firewall.

I have multiple VLANs and subnets in this topology, with the firewall acting as the gateway for all of them. iperf3 results between hosts on the hypervisor in the same subnet are ridiculously fast because the traffic never leaves the physical network interface and stays within the DSwitch. iperf3 tests between subnets, however, inevitably have to be routed via the firewall, and this is where I am having problems.

Throughput is capping out at around 16Gbps with OPNsense CPU usage at only 30%, and I would like to see it nearer 19Gbps. I have read performance tuning guides such as https://calomel.org/freebsd_network_tuning.html, disabled hardware CRC, TSO and LRO, and applied many tunables, none of which appear to have made any impact:


hw.ibrs_disable="1"
if_ixl_updated_load="1"
kern.ipc.maxsockbuf="16777216"
net.inet.ip.maxfragpackets="0"
net.inet.ip.maxfragsperpacket="0"
net.inet.rss.enabled="1"
net.inet.tcp.abc_l_var="44"
net.inet.tcp.cc.abe="1"
net.inet.tcp.initcwnd_segments="44"
net.inet.tcp.isn_reseed_interval="4500"
net.inet.tcp.minmss="536"
net.inet.tcp.mssdflt="1460"
net.inet.tcp.recvbuf_max="4194304"
net.inet.tcp.recvspace="65536"
net.inet.tcp.rfc6675_pipe="1"
net.inet.tcp.sendbuf_inc="65536"
net.inet.tcp.sendbuf_max="4194304"
net.inet.tcp.sendspace="65536"
net.inet.tcp.soreceive_stream="1"
net.inet.tcp.syncache.rexmtlimit="0"
net.inet.tcp.syncookies="0"
net.inet.tcp.tso="0"
net.inet6.ip6.maxfragpackets="0"
net.inet6.ip6.maxfrags="0"
net.isr.bindthreads="1"
net.isr.defaultqlimit="8192"
net.isr.dispatch="deferred"
net.isr.maxthreads="-1"
net.link.lagg.default_use_flowid="1"
net.pf.source_nodes_hashsize="1048576"


I am using an Intel XL710 card (Supermicro AOC-STG-i4S) in OPNsense, with the latest firmware and the latest Intel drivers (https://www.freshports.org/net/intel-ixl-kmod/) rather than the ones that come with the OS out of the box. Both NICs in the LAG appear as follows:


ixl2: <Intel(R) Ethernet Connection 700 Series PF Driver, Version - 1.12.40> mem 0x60e0800000-0x60e0ffffff,0x60e2808000-0x60e280ffff at device 0.2 on pci1
ixl2: using 1024 tx descriptors and 1024 rx descriptors
ixl2: fw 9.20.71847 api 1.15 nvm 9.00 etid 8000d2ab oem 1.268.0
ixl2: PF-ID[2]: VFs 32, MSI-X 129, VF MSI-X 5, QPs 384, I2C
ixl2: Using MSI-X interrupts with 9 vectors
ixl2: Allocating 8 queues for PF LAN VSI; 8 queues active
ixl2: Ethernet address: ac:1f:6b:8d:08:ae
ixl2: PCI Express Bus: Speed 8.0GT/s Width x8
ixl2: SR-IOV ready
ixl2: The device is not iWARP enabled
ixl2: Link is up, 10 Gbps Full Duplex, Requested FEC: None, Negotiated FEC: None, Autoneg: False, Flow Control: None
ixl2: link state changed to UP
ixl2: TSO4 requires txcsum, disabling both...
ixl2: TSO6 requires txcsum6, disabling both...
ixl3: <Intel(R) Ethernet Connection 700 Series PF Driver, Version - 1.12.40> mem 0x60e0000000-0x60e07fffff,0x60e2800000-0x60e2807fff at device 0.3 on pci1
ixl3: using 1024 tx descriptors and 1024 rx descriptors
ixl3: fw 9.20.71847 api 1.15 nvm 9.00 etid 8000d2ab oem 1.268.0
ixl3: PF-ID[3]: VFs 32, MSI-X 129, VF MSI-X 5, QPs 384, I2C
ixl3: Using MSI-X interrupts with 9 vectors
ixl3: Allocating 8 queues for PF LAN VSI; 8 queues active
ixl3: Ethernet address: ac:1f:6b:8d:08:af
ixl3: PCI Express Bus: Speed 8.0GT/s Width x8
ixl3: SR-IOV ready
ixl3: The device is not iWARP enabled
ixl3: Link is up, 10 Gbps Full Duplex, Requested FEC: None, Negotiated FEC: None, Autoneg: False, Flow Control: None
ixl3: link state changed to UP
ixl3: TSO4 requires txcsum, disabling both...
ixl3: TSO6 requires txcsum6, disabling both...

[admin@lonrtr01 ~]$ ifconfig -vvvv lagg0
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=600000a8<VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,VXLAN_HWCSUM,VXLAN_HWTSO>
        ether ac:1f:6b:8d:08:ae
        laggproto lacp lagghash l4
        lagg options:
                flags=15<USE_FLOWID,USE_NUMA,LACP_STRICT>
                flowid_shift: 16
        lagg statistics:
                active ports: 2
                flapping: 0
        lag id: [(8000,AC-1F-6B-8D-08-AE,0152,0000,0000),
                 (8000,F0-9F-C2-0C-85-F8,001A,0000,0000)]
        laggport: ixl2 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> state=3d<ACTIVITY,AGGREGATION,SYNC,COLLECTING,DISTRIBUTING>
                [(8000,AC-1F-6B-8D-08-AE,0152,8000,0003),
                 (8000,F0-9F-C2-0C-85-F8,001A,0080,0001)]
        laggport: ixl3 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> state=3d<ACTIVITY,AGGREGATION,SYNC,COLLECTING,DISTRIBUTING>
                [(8000,AC-1F-6B-8D-08-AE,0152,8000,0004),
                 (8000,F0-9F-C2-0C-85-F8,001A,0080,0002)]
        groups: lagg
        media: Ethernet autoselect
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>


I was hoping I might be able to get some input from the community as to what I can do to squeeze the last few Gbps out of this LAG!
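
For reference: since l4 hashing pins any single flow to one LAG member, throughput tests need multiple parallel flows (ideally between several host pairs) to exercise both links. A minimal sketch with placeholder addresses:

# On a test host in the destination subnet
iperf3 -s

# On a client in another subnet: 8 parallel TCP streams for 30 seconds
iperf3 -c 192.168.20.10 -P 8 -t 30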