Hi @aimdev, thanks. Do you have Suricata (in IPS mode) on WAN? If so, that's good. Yours is another confirmation that this particular chipset works.
Thanks for looking into this @mb

I have put the 20.7 Proxmox virtual instance of OPNsense inline with PPPoE by backing up my 20.1 config and restoring it to 20.7. Really easy to do, and it was up and running quickly.

Suricata runs successfully and alerts on the LAN interface (VirtIO). Switched to running it on the WAN interface and I am not receiving alerts. The new log view with Suricata v5 is different, and it seems to cycle through this when trying to establish IPS on the PPPoE WAN:

Counter                   | TM Name | Value
------------------------------------------------------------------------------------
Date: 6/14/2020 -- 08:49:09 (uptime: 0d, 00h 08m 17s)
------------------------------------------------------------------------------------
flow.memuse               | Total   | 7154304
tcp.reassembly_memuse     | Total   | 196608
tcp.memuse                | Total   | 1146880
flow_mgr.rows_skipped     | Total   | 65536
flow_mgr.rows_checked     | Total   | 65536
flow.spare                | Total   | 10000
------------------------------------------------------------------------------------
Counter                   | TM Name | Value
------------------------------------------------------------------------------------
Date: 6/14/2020 -- 08:49:01 (uptime: 0d, 00h 08m 09s)
------------------------------------------------------------------------------------
flow.memuse               | Total   | 7154304
tcp.reassembly_memuse     | Total   | 196608
tcp.memuse                | Total   | 1146880
flow_mgr.rows_skipped     | Total   | 65536
flow_mgr.rows_checked     | Total   | 65536
flow.spare                | Total   | 10000
------------------------------------------------------------------------------------

I am getting the ifconfig -a to you via PM.

Here is the log when it successfully runs on the LAN interface:

flow.memuse               | Total   | 7177096
tcp.reassembly_memuse     | Total   | 231424
tcp.memuse                | Total   | 1146880
flow_mgr.rows_maxlen      | Total   | 1
flow_mgr.rows_skipped     | Total   | 65534
flow_mgr.rows_checked     | Total   | 65536
flow_mgr.flows_notimeout  | Total   | 2
flow_mgr.flows_checked    | Total   | 2
flow.spare                | Total   | 10000
flow_mgr.new_pruned       | Total   | 9
app_layer.flow.failed_udp | Total   | 40
app_layer.tx.dns_udp      | Total   | 20
app_layer.flow.dns_udp    | Total   | 6
app_layer.flow.failed_tcp | Total   | 4
app_layer.tx.dhcp         | Total   | 2
app_layer.flow.dhcp       | Total   | 1
app_layer.flow.tls        | Total   | 3
tcp.overlap               | Total   | 1
tcp.rst                   | Total   | 16
tcp.synack                | Total   | 9
tcp.syn                   | Total   | 9
tcp.sessions              | Total   | 9
flow.udp                  | Total   | 47
flow.tcp                  | Total   | 39
decoder.max_pkt_size      | Total   | 1514
decoder.avg_pkt_size      | Total   | 474
decoder.udp               | Total   | 320
decoder.tcp               | Total   | 988
decoder.ethernet          | Total   | 1594
decoder.ipv6              | Total   | 2
decoder.ipv4              | Total   | 1306
decoder.bytes             | Total   | 756423
decoder.pkts              | Total   | 1594
capture.kernel_packets    | Total   | 1594
------------------------------------------------------------------------------------
Counter                   | TM Name | Value
------------------------------------------------------------------------------------
Date: 6/14/2020 -- 09:10:57 (uptime: 0d, 00h 01m 52s)
------------------------------------------------------------------------------------
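One way to read the two logs side by side: the working LAN block reports capture.kernel_packets and decoder.pkts, while the cycling WAN block shows only memory and flow-manager counters, i.e. no packets appear to be reaching Suricata on the PPPoE interface at all. A minimal sketch for pulling that counter out of a stats block (the inline sample string is copied from the LAN log above; on a real box you would feed in the stats log itself, whose path, e.g. /var/log/suricata/stats.log, may differ per install):

```shell
# Extract the capture.kernel_packets value from a Suricata stats.log line.
# Sample line taken from the LAN log above; replace with your own log input.
stats='capture.kernel_packets | Total | 1594'
pkts=$(printf '%s\n' "$stats" | awk -F'|' '/capture\.kernel_packets/ {gsub(/ /, "", $3); print $3}')
if [ "${pkts:-0}" -gt 0 ]; then
  echo "interface is delivering packets to Suricata: $pkts"
else
  echo "no packets captured (matches the WAN symptom)"
fi
```

A zero or missing counter on WAN while LAN shows 1594 points at the capture path (netmap on the PPPoE interface) rather than at the rules.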
A quick update: I think this vmx bug has been resolved on FreeBSD 12-STABLE: https://svnweb.freebsd.org/base?view=revision&revision=363163

Let's do some tests and I'll post some results. If anyone out there is able to test a 12-STABLE kernel with an ESXi vmx interface, that'd also be great.
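For anyone testing, a hedged sketch of how you might check whether the running kernel is at or past that revision (the uname string below is illustrative only; on a real FreeBSD/OPNsense box substitute the actual output of uname -a, and note that a revision number alone only orders commits on the same branch):

```shell
# Illustrative sample; on a real system use: sample=$(uname -a)
sample='FreeBSD opnsense 12.1-STABLE FreeBSD 12.1-STABLE r363200 GENERIC amd64'
# Pull the first rNNNNNN token out of the version string.
rev=$(printf '%s\n' "$sample" | grep -oE 'r[0-9]+' | head -n 1 | tr -d r)
if [ "$rev" -ge 363163 ]; then
  echo "kernel at or after r363163 (vmx fix should be present)"
else
  echo "kernel predates r363163"
fi
```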
netmap with the 20.7 release is working for the vtnet driver (VirtIO); the kernel panic is gone.
Quote from: Voodoo on August 03, 2020, 12:34:13 pm
"netmap with 20.7 release for vnet driver (virtio) is working, the kernel panic is gone."

I still observe page faults with virtio vtnet interfaces...
For those who are:

- using Suricata/Sensei on VLANs on em(4)
- experiencing the vmx crash
- experiencing the vtnet crash
- wanting Suricata on PPPoE / OpenVPN interfaces

I have an (unofficial) test kernel ready. Please PM me if you'd like to give it a try.

PS: The vmx patch fixes the kernel crash, but has an outstanding issue: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248494