OPNsense Forum

Archive => 22.1 Legacy Series => Topic started by: bolmsted on December 31, 2021, 07:04:19 pm

Title: TCP BBR congestion control in OPNsense with FreeBSD 13
Post by: bolmsted on December 31, 2021, 07:04:19 pm

I emailed the project directly, but they said they don't have a lot of resources to look at this. Perhaps I can help contribute this change if we can vet it for the OPNsense project.


This is what I wrote to the project@opnsense.org email:
_____________________
With OPNsense moving to FreeBSD 13.1 in the new year, can you look at enabling the TCP BBR congestion control kernel module in the OPNsense project so that people can rate-limit their connections using BBR?

Apparently this option has had great results on Linux: people are able to get their full gigabit internet upload and download speeds, because otherwise the network interface floods the ONT, and Linux users have been using BBR to reach the full data rates.

I use OPNsense, having previously used pfSense on its FreeBSD 11.x base, and that project hasn't kept up with the latest FreeBSD changes.  I don't want to switch to a Linux firewall project because they don't have an easy-to-use interface like the one in pfSense and OPNsense.

I don't think it is anything more than turning on some default options and make options in the kernel config to enable the ability to use it, since it still requires a sysctl setting to actually turn it on.

If I get it working, I'd just like to avoid having to redo this work every time there is a new release of OPNsense.

Perhaps, if I get it working after the 22.1 release is out, I can provide the steps to the OPNsense project for inclusion (assuming it is a simple change), and the OPNsense GUI could later be enhanced to let end users select the congestion control algorithm, including the FQ CoDel options.

Can you provide your kernel config file for the 22.1 FreeBSD target?

FYI, this is the page referencing TCP BBR for FreeBSD.  I understand it is just a matter of adding these to the kernel configuration file at build time, but let me test and get back to the project:

makeoptions WITH_EXTRA_TCP_STACKS=1
options TCPHPTS

https://lists.freebsd.org/pipermail/freebsd-current/2020-April/075930.html

If you can point me to the appropriate forum (or the relevant section of the forum), I can also report back my results there and link back to your project once it is proven.


As far as real kernel modules go, I know someone in another forum who developed changes to the bxe driver to allow syncing at 2.5Gbps, but I'm not sure if he submitted them upstream to the FreeBSD and Linux (bnx2x) communities.  That's relatively easy to build across different kernel releases, and I can approach the FreeBSD project about incorporating those changes.


Title: Re: TCP BBR congestion control in OPNsense with FreeBSD 13
Post by: bolmsted on December 31, 2021, 07:24:01 pm

I've been playing with this since I wrote that email to the project last night (I posted my original email above moments ago), and I now have TCP BBR congestion control enabled in a FreeBSD 13.0 VM. I assume it would be just as easy to migrate to FreeBSD 13.1, even if the kernel version change is significant.

Looks like it is a matter of adding

makeoptions  WITH_EXTRA_TCP_STACKS=1
options      TCPHPTS

to the kernel configuration file and then rebuilding and installing via the usual Handbook sequence: "make buildworld", "make buildkernel KERNCONF=KERNEL-BBR", "make installkernel KERNCONF=KERNEL-BBR" and finally "make installworld" (KERNCONF only applies to the kernel targets).
https://docs.freebsd.org/en/books/handbook/cutting-edge/index.html#makeworld
https://lists.freebsd.org/pipermail/freebsd-current/2020-April/075930.html
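
For reference, the custom config is really just GENERIC plus those two lines; mine looks roughly like this (KERNEL-BBR is simply the name I picked):

# /usr/src/sys/amd64/conf/KERNEL-BBR
include GENERIC
ident   KERNEL-BBR

makeoptions WITH_EXTRA_TCP_STACKS=1   # also builds tcp_rack.ko alongside tcp_bbr.ko
options     TCPHPTS                   # high-precision timer system required for BBR pacing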

After that it was a matter of
kldload tcp_bbr
sudo sysctl net.inet.tcp.functions_default=bbr

or, to make it permanent, add
tcp_bbr_load="YES" to /boot/loader.conf.local
net.inet.tcp.functions_default=bbr to /etc/sysctl.conf
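
A quick way to double-check after a reboot (standard FreeBSD commands, nothing custom):

kldstat -m tcp_bbr                        # module registered with the kernel
sysctl net.inet.tcp.functions_available   # lists the TCP stacks the kernel knows about
sysctl net.inet.tcp.functions_default     # should now report bbr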

Looks like BBR loads into the kernel correctly:
=========================================================
Attempting to load tcp_bbr
tcp_bbr is now available
Attempting to load tcp_bbr
tcp_bbr is now available
vm.uma.tcp_bbr_pcb.stats.xdomain: 0
vm.uma.tcp_bbr_pcb.stats.fails: 0
vm.uma.tcp_bbr_pcb.stats.frees: 5
vm.uma.tcp_bbr_pcb.stats.allocs: 10
vm.uma.tcp_bbr_pcb.stats.current: 5
vm.uma.tcp_bbr_pcb.domain.0.wss: 0
vm.uma.tcp_bbr_pcb.domain.0.imin: 0
vm.uma.tcp_bbr_pcb.domain.0.imax: 0
vm.uma.tcp_bbr_pcb.domain.0.nitems: 0
vm.uma.tcp_bbr_pcb.limit.bucket_max: 18446744073709551615
vm.uma.tcp_bbr_pcb.limit.sleeps: 0
vm.uma.tcp_bbr_pcb.limit.sleepers: 0
vm.uma.tcp_bbr_pcb.limit.max_items: 0
vm.uma.tcp_bbr_pcb.limit.items: 0
vm.uma.tcp_bbr_pcb.keg.domain.0.free_items: 0
vm.uma.tcp_bbr_pcb.keg.domain.0.pages: 8
vm.uma.tcp_bbr_pcb.keg.efficiency: 91
vm.uma.tcp_bbr_pcb.keg.reserve: 0
vm.uma.tcp_bbr_pcb.keg.align: 63
vm.uma.tcp_bbr_pcb.keg.ipers: 9
vm.uma.tcp_bbr_pcb.keg.ppera: 2
vm.uma.tcp_bbr_pcb.keg.rsize: 832
vm.uma.tcp_bbr_pcb.keg.name: tcp_bbr_pcb
vm.uma.tcp_bbr_pcb.bucket_size_max: 254
vm.uma.tcp_bbr_pcb.bucket_size: 16
vm.uma.tcp_bbr_pcb.flags: 0x810000<VTOSLAB,FIRSTTOUCH>
vm.uma.tcp_bbr_pcb.size: 832
vm.uma.tcp_bbr_map.stats.xdomain: 0
vm.uma.tcp_bbr_map.stats.fails: 0
vm.uma.tcp_bbr_map.stats.frees: 4971
vm.uma.tcp_bbr_map.stats.allocs: 4973
vm.uma.tcp_bbr_map.stats.current: 2
vm.uma.tcp_bbr_map.domain.0.wss: 348
vm.uma.tcp_bbr_map.domain.0.imin: 126
vm.uma.tcp_bbr_map.domain.0.imax: 126
vm.uma.tcp_bbr_map.domain.0.nitems: 126
vm.uma.tcp_bbr_map.limit.bucket_max: 18446744073709551615
vm.uma.tcp_bbr_map.limit.sleeps: 0
vm.uma.tcp_bbr_map.limit.sleepers: 0
vm.uma.tcp_bbr_map.limit.max_items: 0
vm.uma.tcp_bbr_map.limit.items: 0
vm.uma.tcp_bbr_map.keg.domain.0.free_items: 0
vm.uma.tcp_bbr_map.keg.domain.0.pages: 26
vm.uma.tcp_bbr_map.keg.efficiency: 96
vm.uma.tcp_bbr_map.keg.reserve: 0
vm.uma.tcp_bbr_map.keg.align: 7
vm.uma.tcp_bbr_map.keg.ipers: 31
vm.uma.tcp_bbr_map.keg.ppera: 1
vm.uma.tcp_bbr_map.keg.rsize: 128
vm.uma.tcp_bbr_map.keg.name: tcp_bbr_map
vm.uma.tcp_bbr_map.bucket_size_max: 254
vm.uma.tcp_bbr_map.bucket_size: 126
vm.uma.tcp_bbr_map.flags: 0x10000<FIRSTTOUCH>
vm.uma.tcp_bbr_map.size: 128
net.inet.tcp.bbr.clrlost: 0
net.inet.tcp.bbr.software_pacing: 5
net.inet.tcp.bbr.hdwr_pacing: 0
net.inet.tcp.bbr.enob_no_hdwr_pacing: 0
net.inet.tcp.bbr.enob_hdwr_pacing: 0
net.inet.tcp.bbr.rtt_tlp_thresh: 1
net.inet.tcp.bbr.reorder_fade: 60000000
net.inet.tcp.bbr.reorder_thresh: 2
net.inet.tcp.bbr.bb_verbose: 0
net.inet.tcp.bbr.sblklimit: 128
net.inet.tcp.bbr.resend_use_tso: 0
net.inet.tcp.bbr.data_after_close: 1
net.inet.tcp.bbr.kill_paceout: 10
net.inet.tcp.bbr.error_paceout: 10000
net.inet.tcp.bbr.cheat_rxt: 1
net.inet.tcp.bbr.policer.false_postive_thresh: 100
net.inet.tcp.bbr.policer.loss_thresh: 196
net.inet.tcp.bbr.policer.false_postive: 0
net.inet.tcp.bbr.policer.from_rack_rxt: 0
net.inet.tcp.bbr.policer.bwratio: 8
net.inet.tcp.bbr.policer.bwdiff: 500
net.inet.tcp.bbr.policer.min_pes: 4
net.inet.tcp.bbr.policer.detect_enable: 1
net.inet.tcp.bbr.minrto: 30
net.inet.tcp.bbr.timeout.rxtmark_sackpassed: 0
net.inet.tcp.bbr.timeout.incr_tmrs: 1
net.inet.tcp.bbr.timeout.pktdelay: 1000
net.inet.tcp.bbr.timeout.minto: 1000
net.inet.tcp.bbr.timeout.tlp_retry: 2
net.inet.tcp.bbr.timeout.maxrto: 4
net.inet.tcp.bbr.timeout.tlp_dack_time: 200000
net.inet.tcp.bbr.timeout.tlp_minto: 10000
net.inet.tcp.bbr.timeout.persmax: 1000000
net.inet.tcp.bbr.timeout.persmin: 250000
net.inet.tcp.bbr.timeout.tlp_uses: 3
net.inet.tcp.bbr.timeout.delack: 100000
net.inet.tcp.bbr.cwnd.drop_limit: 0
net.inet.tcp.bbr.cwnd.target_is_unit: 0
net.inet.tcp.bbr.cwnd.red_mul: 1
net.inet.tcp.bbr.cwnd.red_div: 2
net.inet.tcp.bbr.cwnd.red_growslow: 1
net.inet.tcp.bbr.cwnd.red_scale: 20000
net.inet.tcp.bbr.cwnd.do_loss_red: 600
net.inet.tcp.bbr.cwnd.initwin: 10
net.inet.tcp.bbr.cwnd.lowspeed_min: 4
net.inet.tcp.bbr.cwnd.highspeed_min: 12
net.inet.tcp.bbr.cwnd.max_target_limit: 8
net.inet.tcp.bbr.cwnd.may_shrink: 0
net.inet.tcp.bbr.cwnd.tar_rtt: 0
net.inet.tcp.bbr.startup.loss_exit: 1
net.inet.tcp.bbr.startup.low_gain: 25
net.inet.tcp.bbr.startup.gain: 25
net.inet.tcp.bbr.startup.use_lowerpg: 1
net.inet.tcp.bbr.startup.loss_threshold: 2000
net.inet.tcp.bbr.startup.cheat_iwnd: 1
net.inet.tcp.bbr.states.google_exit_loss: 1
net.inet.tcp.bbr.states.google_gets_earlyout: 1
net.inet.tcp.bbr.states.use_cwnd_maindrain: 1
net.inet.tcp.bbr.states.use_cwnd_subdrain: 1
net.inet.tcp.bbr.states.subdrain_applimited: 1
net.inet.tcp.bbr.states.dr_filter_life: 8
net.inet.tcp.bbr.states.rand_ot_disc: 50
net.inet.tcp.bbr.states.ld_mul: 4
net.inet.tcp.bbr.states.ld_div: 5
net.inet.tcp.bbr.states.gain_extra_time: 1
net.inet.tcp.bbr.states.gain_2_target: 1
net.inet.tcp.bbr.states.drain_2_target: 1
net.inet.tcp.bbr.states.drain_floor: 88
net.inet.tcp.bbr.states.startup_rtt_gain: 0
net.inet.tcp.bbr.states.use_pkt_epoch: 0
net.inet.tcp.bbr.states.idle_restart_threshold: 100000
net.inet.tcp.bbr.states.idle_restart: 0
net.inet.tcp.bbr.measure.noretran: 0
net.inet.tcp.bbr.measure.quanta: 3
net.inet.tcp.bbr.measure.min_measure_before_pace: 4
net.inet.tcp.bbr.measure.min_measure_good_bw: 1
net.inet.tcp.bbr.measure.ts_delta_percent: 150
net.inet.tcp.bbr.measure.ts_peer_delta: 20
net.inet.tcp.bbr.measure.ts_delta: 20000
net.inet.tcp.bbr.measure.ts_can_raise: 0
net.inet.tcp.bbr.measure.ts_limiting: 1
net.inet.tcp.bbr.measure.use_google: 1
net.inet.tcp.bbr.measure.no_sack_needed: 0
net.inet.tcp.bbr.measure.min_i_bw: 62500
net.inet.tcp.bbr.pacing.srtt_div: 2
net.inet.tcp.bbr.pacing.srtt_mul: 1
net.inet.tcp.bbr.pacing.seg_divisor: 1000
net.inet.tcp.bbr.pacing.utter_max: 0
net.inet.tcp.bbr.pacing.seg_floor: 1
net.inet.tcp.bbr.pacing.seg_tso_max: 2
net.inet.tcp.bbr.pacing.tso_min: 1460
net.inet.tcp.bbr.pacing.all_get_min: 0
net.inet.tcp.bbr.pacing.google_discount: 10
net.inet.tcp.bbr.pacing.tcp_oh: 1
net.inet.tcp.bbr.pacing.ip_oh: 1
net.inet.tcp.bbr.pacing.enet_oh: 0
net.inet.tcp.bbr.pacing.seg_deltarg: 7000
net.inet.tcp.bbr.pacing.bw_cross: 2896000
net.inet.tcp.bbr.pacing.hw_pacing_delay_cnt: 10
net.inet.tcp.bbr.pacing.hw_pacing_floor: 1
net.inet.tcp.bbr.pacing.hw_pacing_adj: 2
net.inet.tcp.bbr.pacing.hw_pacing_limit: 8000
net.inet.tcp.bbr.pacing.hw_pacing: 0
net.inet.tcp.bbr.probertt.can_use_ts: 1
net.inet.tcp.bbr.probertt.use_cwnd: 1
net.inet.tcp.bbr.probertt.is_ratio: 0
net.inet.tcp.bbr.probertt.can_adjust: 1
net.inet.tcp.bbr.probertt.enter_sets_force: 0
net.inet.tcp.bbr.probertt.can_force: 0
net.inet.tcp.bbr.probertt.drain_rtt: 3
net.inet.tcp.bbr.probertt.filter_len_sec: 6
net.inet.tcp.bbr.probertt.mintime: 200000
net.inet.tcp.bbr.probertt.int: 4000000
net.inet.tcp.bbr.probertt.cwnd: 4
net.inet.tcp.bbr.probertt.gain: 192
bbr                             * bbr                              5
net.inet.tcp.functions_default: bbr
        value:  /boot/kernel/tcp_bbr.ko
=========================================================




At this point I would like to know how to build from their release and just add these options:
https://github.com/opnsense/tools

I see their kernel configuration is at https://github.com/opnsense/tools/tree/master/config/22.1, but I'm not sure whether I have to grab more than their kernel config ("SMP", or what have you) to rebuild the kernel, or how to transport the result from my development FreeBSD VM over to my OPNsense VM when the time comes, after 22.1 has been released.

My current OPNsense shows it was built from /usr/obj/usr/src/amd64.amd64/sys/SMP, which corresponds to the SMP config in https://github.com/opnsense/tools/tree/master/config/22.1 (21.7 in my case).

root@OPNsense:~ # uname -a
FreeBSD OPNsense.localdomain 12.1-RELEASE-p21-HBSD FreeBSD 12.1-RELEASE-p21-HBSD #0  04bde01a034(stable/21.7)-dirty: Mon Dec 13 09:07:56 CET 2021     root@sensey:/usr/obj/usr/src/amd64.amd64/sys/SMP  amd64
root@OPNsense:~ #

So I'd like to know how to rebuild the OPNsense kernel and move it over to replace the current one.
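
If I'm reading the tools repo right, the build side would look something like this (untested; the exact targets, paths and variable names still need to be confirmed against the tools README, and getting the resulting kernel set installed on an OPNsense box is the part I have to figure out):

# rough sketch only -- verify against the https://github.com/opnsense/tools README
git clone https://github.com/opnsense/tools /usr/tools
cd /usr/tools
make update        # fetch/update the matching src, ports, core and plugins trees

# append the BBR build knobs to the stock kernel config for the release
printf 'makeoptions WITH_EXTRA_TCP_STACKS=1\noptions TCPHPTS\n' >> config/22.1/SMP

make kernel        # build the kernel set from the modified config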


Title: Re: TCP BBR congestion control in OPNsense with FreeBSD 13
Post by: bolmsted on December 31, 2021, 07:27:01 pm

My current FreeBSD 13.0 VM is running a custom kernel called KERNEL-BBR:
root@freebsd:~ # uname -a
FreeBSD freebsd 13.0-RELEASE FreeBSD 13.0-RELEASE #1: Fri Dec 31 12:29:03 EST 2021     root@freebsd:/usr/obj/usr/src/amd64.amd64/sys/KERNEL-BBR  amd64
root@freebsd:~ #

So it is a matter of incorporating these options into the OPNsense FreeBSD setup, rebuilding, and transporting the result to my OPNsense VM later. But if we can incorporate this into the master branch, I can avoid all of that, provided I can prove it works for my setup in early 2022 when 22.1 comes out.
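
In the meantime, a rough way to A/B test it in the VM is to toggle the default stack between runs, something like this (assumes iperf3 from packages; 192.0.2.10 is just a placeholder for a nearby iperf3 server):

sysctl net.inet.tcp.functions_default=freebsd   # stock FreeBSD stack
iperf3 -c 192.0.2.10 -t 30
sysctl net.inet.tcp.functions_default=bbr       # BBR stack (new connections only)
iperf3 -c 192.0.2.10 -t 30
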
Title: Re: TCP BBR congestion control in OPNsense with FreeBSD 13
Post by: Patrick M. Hausen on December 31, 2021, 07:45:23 pm
In which way do you think a firewall system could benefit from BBR? Specifically a packet filter firewall?

BBR needs to be implemented on the end node where the TCP connection terminates. Not on an intermediate system that just forwards packets without (most of the time) touching TCP at all.

When people claim huge improvements on their Linux systems, they are referring to their servers, not their firewalls, most probably.
Title: Re: TCP BBR congestion control in OPNsense with FreeBSD 13
Post by: bolmsted on December 31, 2021, 08:19:27 pm
Hello @pmhausen

There are a number of users in the DSL Reports forum (and a spun-off Slack/Discord group) who report that using TCP BBR congestion control on their Linux firewalls (which have had BBR support in the kernel for a long time; it was only recently added to FreeBSD) has increased the download/upload speeds through those firewalls.

Because the traffic coming in from the internal network tends to flood the GPON ONT's upstream on the fibre network, they don't get the full upload speed.  They are using BBR to limit the traffic coming in from their network so that they get the full upload speeds.


=======================
JAMESMTL  9:22 PM
at 2500 you can exceed the provisioned upload rate causing packet loss, packet retransmissions, and performance degradation.
9:25
This is what it looks like for me under various conditions. performance loss is affected by latency.
1000 (No Throttle)                  Latency (ms)   Down (Mbps)   Up (Mbps)
Bell Alliant - Halifax (3907)             24            792         919
Bell Canada - Montreal (17567)             1            901         933
Bell Canada - Toronto (17394)              8            765         933
Bell Mobility - Winnepeg (17395)          30            874         913
Bell Mobility - Calgary (17399)           47            782         898

2500 (No Throttle)                  Latency (ms)   Down (Mbps)   Up (Mbps)
Bell Alliant - Halifax (3907)             24           1564         358
Bell Canada - Montreal (17567)             1           1638         933
Bell Canada - Toronto (17394)              8           1619         771
Bell Mobility - Winnepeg (17395)          30           1569         324
Bell Mobility - Calgary (17399)           47           1570         236

2500 (Throttled 1000 up)            Latency (ms)   Down (Mbps)   Up (Mbps)
Bell Alliant - Halifax (3907)             24           1558         915
Bell Canada - Montreal (17567)             1           1650         931
Bell Canada - Toronto (17394)              8           1620         931
Bell Mobility - Winnepeg (17395)          30           1569         885
Bell Mobility - Calgary (17399)           47           1600         890

2500 (Throttled 1000 up + BBR)      Latency (ms)   Down (Mbps)   Up (Mbps)
Bell Alliant - Halifax (3907)             24           1541         922
Bell Canada - Montreal (17567)             1           1639         940
Bell Canada - Toronto (17394)              8           1583         936
Bell Mobility - Winnepeg (17395)          30           1617         928
Bell Mobility - Calgary (17399)           47           1547         916
=======================


I'll see if I can find more details.
Title: Re: TCP BBR congestion control in OPNsense with FreeBSD 13
Post by: bolmsted on January 01, 2022, 12:06:22 am

The idea is to use BBR for congestion control in the same way FQ CoDel is used here:
https://www.youtube.com/watch?v=iXqExAALzR8&ab_channel=LawrenceSystems
Title: Re: TCP BBR congestion control in OPNsense with FreeBSD 13
Post by: johndchch on January 01, 2022, 07:17:49 am
Quote from: bolmsted on December 31, 2021, 08:19:27 pm
Because the traffic coming in from the internal network tends to flood the GPON ONT's upstream on the fibre network, they don't get the full upload speed.  They are using BBR to limit the traffic coming in from their network so that they get the full upload speeds.

Whilst I have seen this on my own 900/500 fibre connection, it's easily fixed using the shaper (without the shaper enabled on upload I max out at 350; with it enabled I get the full 500).



Title: Re: TCP BBR congestion control in OPNsense with FreeBSD 13
Post by: Patrick M. Hausen on January 02, 2022, 03:05:26 pm
But FQ CoDel is applied at packed forwarding, so this makes sense on the firewall. BBR is applied at the TCP layer and the firewall does not do TCP unless you use e.g. the HTTP proxy. All documentation I could find about BBR being successfully implemented on Linux systems strictly referred to end nodes, not to firewalls/routers.