I am curious if I am seeing this kernel problem on my bare-metal install. I have a passively cooled mini PC with 4 Intel NICs, a J1900 CPU at 2.00 GHz, and 4 GB of RAM. I know this CPU is fairly old, but the hardware sizing guide says I should be able to do 350-750 Mbit/s of throughput. With no firewall rules enabled and the default IPS settings I get about 370-380 Mbit/s of my 400 Mbit/s inbound speed. If I enable firewall rules to set up fq_codel, my throughput drops to 320-340 Mbit/s. In both scenarios I see my CPU going up to 90+% on one thread. I understand that throughput will go down with options like IPS and firewall rules, but I would think that with nothing else running, this hardware should be able to do better than 380 Mbit/s tops.
Quote from: DiHydro on February 11, 2021, 09:40:20 pm

Using FQ_Codel or IPS is secondary to the overall discussion here. Both consume a large number of CPU cycles and won't illustrate the true throughput capability of the firewall, because of their own inherent overhead.

I run a J3455 with a quad-port Intel I340 NIC and can easily push 1 gigabit with the stock ruleset, with plenty of CPU headroom remaining. This unit can also enable FQ_Codel on WAN and still push 1 gigabit, although CPU usage does increase by around 20% at those speeds.

I don't personally run any of the IPS components, so I don't have any direct feedback on that. It's worth noting that both of these tests were done on a traditional DHCP WAN connection. If you're using PPPoE, that is single-thread bound and will limit your throughput to the maximum speed of a single core.

What most of the transfer speed tests here are illustrating is that FreeBSD seems to scale very poorly when forwarding packets on virtualized 10 Gbit NICs. This isn't an issue OPNsense introduced, more one OPNsense gets stuck with due to poor upstream support from FreeBSD. For the vast majority of users on 1 gigabit or slower connections, this won't be a cause for concern in the near future.
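If you want to confirm whether you're hitting a single-core limit, watch per-core load while a speed test runs; on FreeBSD, top -P shows this interactively. If you'd rather log it, here is a rough Python sketch of the same idea. It assumes Python 3 and the third-party psutil package, neither of which is part of a stock OPNsense install, so treat it as an illustration rather than a supported tool:

# per_core_load.py - sample per-core CPU while a speed test runs, to spot
# one core pinned while the rest sit idle (the classic sign of a
# single-thread bottleneck such as PPPoE or a shaper pipe).
# Assumes Python 3 and the third-party 'psutil' package are installed.
import psutil

DURATION = 30   # seconds to sample
INTERVAL = 1.0  # seconds between samples

for _ in range(int(DURATION / INTERVAL)):
    # percpu=True returns one utilisation figure per logical core
    per_core = psutil.cpu_percent(interval=INTERVAL, percpu=True)
    line = "  ".join(f"core{i}:{pct:5.1f}%" for i, pct in enumerate(per_core))
    flag = "  <-- one core saturated" if max(per_core) > 90 and min(per_core) < 30 else ""
    print(line + flag)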
Quote from: DiHydro on February 11, 2021, 09:40:20 pm

I wonder what throughput you would get with a Linux-based firewall, just to see what the hardware is capable of. My experience with the current OPNsense 21.1 release is that it gives me only ~50% throughput after performance tuning in a virtualized environment, while a quick test with a virtualized OpenWrt gave me full gigabit wire speed without any optimization needed. I know that's comparing apples and oranges, but it's difficult to say what a hardware platform is capable of if you don't try different things.
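Before reinstalling anything, one quick check that costs nothing: run iperf3 through the box with a growing number of parallel streams and see whether the aggregate throughput scales. If 4 streams move noticeably more than 1, that would suggest a per-flow or per-core limit (shaper pipe, single queue) rather than the hardware's overall forwarding capacity. A rough Python sketch of what I mean, with a placeholder server address and assuming the iperf3 binary is installed on the client machine:

# iperf_scaling.py - run iperf3 with 1, 2, and 4 parallel streams against a
# host on the far side of the firewall and compare aggregate throughput.
# The server address below is a placeholder - point it at your own iperf3
# server; assumes the iperf3 binary is installed on this client.
import json
import subprocess

SERVER = "192.0.2.10"  # placeholder, replace with your iperf3 server

for streams in (1, 2, 4):
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    mbps = result["end"]["sum_received"]["bits_per_second"] / 1e6
    print(f"{streams} stream(s): {mbps:.0f} Mbit/s aggregate")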
I am going to try this in a day or two. IPFire is my choice right now, unless someone has a different suggestion. I will probably come back to OPNsense either way, as I like this community and the project.
So I put OPNsense on a PC with an Intel PRO/1000 4-port NIC and an i7-2600, and with a default install I get my full 450 Mbit/s. Once I add a firewall rule to enable fq_codel, it drops to 360-380 Mbit/s. I find it hard to believe that an i7 at 3.4 GHz with an Intel NIC can't handle these rules at full speed. What is wrong, what can I look at, and how can I help make this better?
I have exactly the same problem. Apparently there are problems with the vmxnet3 vNIC here. It's sad, but I can't get higher than 1.4 Gbps. And please don't tell me the answer is different hardware. Sorry folks, it's 2021; 10 Gbps is what every firewall should be able to do by default. OPNsense is a wonderful product, but I think you are betting on a dead horse. Why not use Linux as the OS? FreeBSD slept through the virtualized world (see the s... vmxnet3 support and bugs). Now I've vented my frustration and will get back to work.