Always a pleasure. This score looks great. I *guess* this can easily do over 1 Gbps (maybe 1.5-2 Gbps), provided that the ethernet driver does not have problems with netmap. Reading the product specs at https://shop.opnsense.com/wp-content/uploads/2021/02/BROCHURE-DEC840_50.pdf, I see that IPS throughput is around 2 Gbps. IPS and Sensei use the same packet interface, so I'd guess that with 16 GB of RAM you can easily serve a school campus with around 1000 devices and a 1+ Gbps WAN.
10 Gbps throughput does not require specialized hardware. If your 10GbE adapter supports RSS (Receive Side Scaling) - and almost all 10GbE adapters already do - you're good to go. Just make sure that your adapter plays well with netmap. Having said that, default RSS configurations do not provide "symmetric" hashing - both directions of a flow hashing to the same queue so that a single worker sees the whole connection - and that is something we need to discuss with the OPNsense team. It's a relatively straightforward development, but it still means a bit of deviation from the upstream kernel source.
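To make the "symmetric hashing" point concrete, here is a minimal sketch (my own illustration, not Sensei or OPNsense kernel code) of the standard Toeplitz RSS hash. It shows that loading the key with the 16-bit pattern 0x6d5a repeated makes hash(A->B) equal hash(B->A), so both directions of a connection land on the same RSS bucket and the same core. The `toeplitz_hash()` helper and the sample addresses/ports are assumptions for illustration only; the 40-byte key and 12-byte IPv4 4-tuple layout follow the usual Microsoft RSS convention.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Standard Toeplitz hash over 'len' input bytes using a 40-byte key. */
static uint32_t toeplitz_hash(const uint8_t *key, const uint8_t *data, int len)
{
    uint32_t hash = 0;
    /* 'window' always holds the leftmost 32 bits of the (shifted) key. */
    uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
                      ((uint32_t)key[2] << 8)  |  (uint32_t)key[3];
    int key_bit = 32;                      /* next key bit to shift in */

    for (int i = 0; i < len; i++) {
        for (int b = 7; b >= 0; b--) {
            if (data[i] & (1u << b))
                hash ^= window;            /* XOR current 32-bit key window */
            window <<= 1;                  /* slide the window one key bit */
            if (key[key_bit / 8] & (1u << (7 - (key_bit % 8))))
                window |= 1;
            key_bit++;
        }
    }
    return hash;
}

int main(void)
{
    /* Key with a 16-bit period (0x6d5a repeated): src/dst IPs and ports sit
     * at offsets that differ by multiples of 16 bits, so swapping them does
     * not change the hash and both directions map to the same RSS bucket. */
    uint8_t sym_key[40];
    for (int i = 0; i < 40; i += 2) { sym_key[i] = 0x6d; sym_key[i + 1] = 0x5a; }

    /* Hypothetical flow 192.168.1.10:12345 <-> 10.0.0.1:443 */
    uint8_t fwd[12] = { 192,168,1,10,  10,0,0,1,      0x30,0x39, 0x01,0xbb };
    uint8_t rev[12] = { 10,0,0,1,      192,168,1,10,  0x01,0xbb, 0x30,0x39 };

    uint32_t h_fwd = toeplitz_hash(sym_key, fwd, 12);
    uint32_t h_rev = toeplitz_hash(sym_key, rev, 12);
    printf("fwd=0x%08" PRIx32 " rev=0x%08" PRIx32 " %s\n", h_fwd, h_rev,
           h_fwd == h_rev ? "(symmetric)" : "(NOT symmetric)");
    return 0;
}
```

Compile it with any C compiler and the two printed hashes should match; swap in a random 40-byte key and they will generally diverge, which is exactly why the default RSS setup can put the two halves of a flow on different cores.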
Hi @athurdent, we had an email exchange about this with the OPNsense team a while ago, and it looks like the team is very close to getting RSS into the kernel: https://twitter.com/opnsense/status/1425402746602246150
After it hits one of the OPNsense releases, we'll go ahead and enable multi-core support for Sensei.
src: include RSS kernel support defaulting to off
```
root@OPNsense:~ # netstat -Q
Configuration:
Setting                        Current        Limit
Thread count                         4            4
Default queue limit                256        10240
Dispatch policy                 direct          n/a
Threads bound to CPUs          enabled          n/a

Protocols:
Name   Proto QLimit Policy Dispatch Flags
ip         1   1000    cpu   hybrid   C--
igmp       2    256 source  default   ---
rtsock     3    256 source  default   ---
arp        4    256 source  default   ---
ether      5    256    cpu   direct   C--
ip6        6    256    cpu   hybrid   C--
ip_direct  9    256    cpu   hybrid   C--
ip6_direct 10    256    cpu   hybrid   C--

Workstreams:
WSID CPU   Name       Len WMark   Disp'd  HDisp'd   QDrops   Queued  Handled
   0   0   ip           0   360        0   671367        0   198087   869454
   0   0   igmp         0     0        0        0        0        0        0
   0   0   rtsock       0     0        0        0        0        0        0
   0   0   arp          0     0        0        0        0        0        0
   0   0   ether        0     0   818270        0        0        0   818270
   0   0   ip6          0     2        0      175        0      344      519
   0   0   ip_direct    0     0        0        0        0        0        0
   0   0   ip6_direct   0     0        0        0        0        0        0
   1   1   ip           0   188        0  1120895        0    23110  1144005
   1   1   igmp         0     0        0        0        0        0        0
   1   1   rtsock       0     0        0        0        0        0        0
   1   1   arp          0     0     1670        0        0        0     1670
   1   1   ether        0     0  1209891        0        0        0  1209891
   1   1   ip6          0     2        0      763        0      359     1122
   1   1   ip_direct    0     0        0        0        0        0        0
   1   1   ip6_direct   0     0        0        0        0        0        0
   2   2   ip           0   298        0   833862        0    21968   855830
   2   2   igmp         0     0        0        0        0        0        0
   2   2   rtsock       0     0        0        0        0        0        0
   2   2   arp          0     0        6        0        0        0        6
   2   2   ether        0     0   841523        0        0        0   841523
   2   2   ip6          0     2        0      248        0      715      963
   2   2   ip_direct    0     0        0        0        0        0        0
   2   2   ip6_direct   0     0        0        0        0        0        0
   3   3   ip           0   921        0  2282494        0   121993  2404487
   3   3   igmp         0     0        0        0        0        0        0
   3   3   rtsock       0     5        0        0        0      186      186
   3   3   arp          0     0      537        0        0        0      537
   3   3   ether        0     0  2359454        0        0        0  2359454
   3   3   ip6          0     2        0     1348        0      418     1766
   3   3   ip_direct    0     0        0        0        0        0        0
   3   3   ip6_direct   0     0        0        0        0        0        0
```
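For reference, the state shown above ("cpu" dispatch policy, netisr threads bound to CPUs) is what you typically get on a FreeBSD kernel built with RSS once a few loader tunables are set. This is only a sketch under the assumption that the OPNsense kernel exposes the standard FreeBSD knobs; names and recommended values may differ once the feature officially ships:

```
# Sketch only - standard FreeBSD netisr/RSS loader tunables, assuming a
# kernel built with RSS support (which, per the commit above, defaults to off).
net.inet.rss.enabled=1    # enable receive-side scaling
net.inet.rss.bits=2       # 2^2 = 4 hash buckets, matching a 4-core box
net.isr.maxthreads=-1     # one netisr thread per CPU
net.isr.bindthreads=1     # pin each netisr thread to its CPU
```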
@athurdent, thanks for the heads-up. I've just confirmed this with Franco. We'll be running our tests in the coming week. Note: the team's agenda is quite full with Shaping and TLS work, so expect this to land in a production release somewhat later. (We'll send you a test binary, though.)
Yes. If anyone can share the ubench -cs output, I can provide a throughput estimate.