Messages - bolmsted

#1
23.7 Legacy Series / Re: OPNsense VM just dying
January 28, 2024, 03:14:57 PM

Of note, from the other thread:

Quote
Aug 7, 2023
#32
showiproute said:
Doubling everything seems to solve the problem also.
My configuration now includes queue=8 again and the VM didn't crash.

Question is why was it running with PVE7.x while PVE8 needs some manual config changes.

Fiona (Proxmox Staff Member)
QEMU might've changed internal things and uses more file descriptors now. And the OPNSense update might've changed interaction with QEMU and the host too. Likely, you were already near the limit before, but didn't quite hit it.

#2
23.7 Legacy Series / Re: OPNsense VM just dying
January 28, 2024, 03:08:51 PM

I can try pointing syslog at one of my Ubuntu VMs sitting on the Proxmox host, but I can't do a lot of fiddling since it is our home internet gateway and the natives get restless, especially because we both WFH. The NOFILE limit makes sense, so I started with that based on the recommendations, but I will see if syslog gives any clues if this happens again.
#3
23.7 Legacy Series / Re: OPNsense VM just dying
January 28, 2024, 02:45:57 AM
I'm on 23.7.12 running on Proxmox 8.1.4. I adjusted the limits to 4096 open files (from the default of 1024) and increased queues=8 on the network interfaces, but I'm not sure which specific settings you are referring to.

Quote
root@proxmox:/var/log# for pid in $(pidof kvm); do prlimit -p $pid | grep NOFILE; ls -1 /proc/$pid/fd/ | wc -l; done
NOFILE     max number of open files                4096    524288 files
297
NOFILE     max number of open files                4096    524288 files
46
NOFILE     max number of open files                4096    524288 files
46
NOFILE     max number of open files                4096    524288 files
46
NOFILE     max number of open files                4096    524288 files
41
root@proxmox:/var/log#

I copied /etc/security/limits.conf to /etc/security/limits.d/limits.conf and edited it to add the line for NOFILE; the numbers from the for loop above were 1024 prior to making the change and rebooting Proxmox.
Quote
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#root            hard    core            100000
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#ftp             -       chroot          /ftp
#@student        -       maxlogins       4

root      soft    nofile      4096

# End of file
root@proxmox:/etc/security/limits.d#
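
Side note for my future self: the soft limit of an already-running kvm process can also be bumped in place with prlimit from util-linux, so a reboot isn't strictly required to test this (the PID and values below are just an example):

# raise the soft NOFILE limit of a running kvm process, e.g. PID 12345
prlimit --nofile=4096:524288 -p 12345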


Changed the queues=4 setting to queues=8 on the VM's vmbr#-attached network interface as per one of the posts and will monitor to see if the system stays stable for longer. If I haven't come back to this post in 6 months asking for another fix, I guess it is working; I just tried the first suggestion from the search results.
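
If I'm reading the Proxmox config right, that queues value ends up on the VM's net device line, i.e. something like this in the VM config (VM ID and MAC below are placeholders):

# /etc/pve/qemu-server/<vmid>.conf -- NIC line after the change
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr2,queues=8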


Looks like I also edited /etc/systemd/system.conf; I recall copying system.conf to /root because I didn't want to leave a backup copy in the directory in case it got read by the OS.
Quote
root@proxmox:/etc/security/limits.d# diff /etc/systemd/system.conf /root/system.conf.20240126
67,70d66
< #
< #added to see if prevents OPNsense VM from hanging/crashing
< DefaultLimitNOFILE=4096:524288
< #
root@proxmox:/etc/security/limits.d#

This is also good for documenting what I did, in case this comes back to haunt me later.
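
An alternative I may try later, instead of editing the stock file, would be a systemd drop-in (untested on my box, and the filename is just an example):

# /etc/systemd/system.conf.d/nofile.conf
[Manager]
DefaultLimitNOFILE=4096:524288

followed by "systemctl daemon-reexec" and a restart of the VM so the new default gets picked up.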


In case the search results change later: from the 2nd poster I went to this page from the search results
https://forum.proxmox.com/threads/opnsense-keeps-crashing.131601/
and then to the linked page for modifying system.conf and creating limits.conf
https://forum.proxmox.com/threads/qemu-crash-with-vzdump.131603/#post-578351
#4
23.7 Legacy Series / Re: OPNsense VM just dying
January 25, 2024, 11:14:02 PM
I of course searched online, but with a different search engine, so I will see if any of these solve the problem. Looks like someone suggested updating limits.conf and possibly the queues, so I will look into this, but it may take a while to tell whether any fix worked since this only happens every few months. I just thought this might have popped up as a known issue.
#5
23.7 Legacy Series / OPNsense VM just dying
January 25, 2024, 05:57:35 PM

Over the last couple of months my OPNsense VM has just been dying, or possibly crashing, inside my Proxmox environment, and I have to recycle the VM; we lose our internet whenever the VM hangs/dies/crashes.

This has happened a number of times over the last couple of months, just spontaneously. I've tried patching to the latest OPNsense patches each time it has happened, but it still crashes sporadically, most recently today.

Any idea what is going on, and which logs in FreeBSD would help narrow down what is causing this?

It was working for the last 1-1.5 years without issues, and suddenly OPNsense is just crashing.
#6
Seems like similar symptoms as here
https://forum.opnsense.org/index.php?topic=26602.0

I'm also running in Proxmox using Virtio so I will be paying attention to this thread if it gets traction.
#7
Quote from: johndchch on January 30, 2022, 09:05:03 AM
Quote from: bolmsted on January 30, 2022, 04:17:58 AM

A few of us in Bell Canada land are noticing the same thing and suspect its VLAN on WAN as I'm virtualizing OPNsense on Proxmox (using virtio as I switched from passthrough of bxe0) and someone else is running on baremetal.

I'm on VLAN-tagged WAN as well; however, I'm having my switch handle the tagging so OPNsense doesn't have to. If you've got a suitable switch it should be easy to test this and see if it cures your issue.

Well, we are using SFP ports for our internet connection, so if I were to buy a Ubiquiti switch that supports 2.5Gbps and costs $800+, I probably wouldn't be using OPNsense to do this  :(

It was working before the 22.1 upgrade, and I'm doing this on a fresh install.
#8

A few of us in Bell Canada land are noticing the same thing and suspect its VLAN on WAN as I'm virtualizing OPNsense on Proxmox (using virtio as I switched from passthrough of bxe0) and someone else is running on baremetal.
#9

I'm currently in the process of tweaking my internet connection to use Baby Jumbo Frames (RFC 4638), setting the MTU to 1508 bytes on the physical/virtual Ethernet connections that are the underlying "hardware" for a PPPoE interface (whose MTU is set to 1500 bytes so a full Ethernet frame can be passed without fragmenting).

However, I'm noticing that my VLAN interface to the ISP (my ISP requires that the PPPoE be on a tagged VLAN) has an MTU setting of 1508 in the OPNsense GUI, but at the OS level in FreeBSD (HardenedBSD currently, but potatoes / tomatoes) the MTU shows up as 1500.

I'm wondering if this should be considered a bug? My pings with a 1472-byte payload still get through to sites like 1.1.1.1 / 9.9.9.9 (Google is doing something funky at 8.8.8.8 ), but it seems I may have to use the startup syshook as a WORKAROUND to force vtnet2_vlan35 to MTU 1508 via ifconfig (see the sketch below).

https://docs.opnsense.org/development/backend/autorun.html
https://forum.opnsense.org/index.php?topic=16159.0
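
(For anyone wondering about the 1472: that's 1500 minus the 20-byte IP header and 8-byte ICMP header, i.e. the largest unfragmented ping payload at a 1500 path MTU.)

The workaround I have in mind, going by the autorun/syshook docs linked above, would be roughly a script like this dropped into the "start" hook directory and made executable (path and filename are my reading of those docs, so treat this as a sketch):

#!/bin/sh
# e.g. /usr/local/etc/rc.syshook.d/start/99-vlan-mtu
# re-apply the baby jumbo MTU to the VLAN parent of the PPPoE link
ifconfig vtnet2_vlan35 mtu 1508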

_______________________________________________
I'm virtualizing the OPNsense environment in QEMU/KVM via Proxmox.

Proxmox

root@proxmox:~# ifconfig enp1s0f0 | grep mtu
enp1s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1508
root@proxmox:~# ifconfig vmbr2 | grep mtu
vmbr2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1508
root@proxmox:~#


OPNsense

root@OPNsense:~ # ifconfig vtnet2 | grep mtu
vtnet2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1508
root@OPNsense:~ # ifconfig vtnet2_vlan35 | grep mtu
vtnet2_vlan35: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500    <------ 1508?
root@OPNsense:~ # ifconfig pppoe0 | grep mtu
pppoe0: flags=88d1<UP,POINTOPOINT,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 0 mtu 1500
root@OPNsense:~ #
root@OPNsense:~ # ping -D -s 1472 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 1472 data bytes
1480 bytes from 1.1.1.1: icmp_seq=0 ttl=56 time=15.369 ms
1480 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=15.325 ms
1480 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=15.203 ms
1480 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=15.326 ms
1480 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=15.681 ms
1480 bytes from 1.1.1.1: icmp_seq=5 ttl=56 time=15.238 ms
^C
--- 1.1.1.1 ping statistics ---
6 packets transmitted, 6 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 15.203/15.357/15.681/0.156 ms
root@OPNsense:~ #

root@OPNsense:~ # ping -D -s 1472 9.9.9.9
PING 9.9.9.9 (9.9.9.9): 1472 data bytes
1480 bytes from 9.9.9.9: icmp_seq=0 ttl=53 time=47.753 ms
1480 bytes from 9.9.9.9: icmp_seq=1 ttl=53 time=47.782 ms
1480 bytes from 9.9.9.9: icmp_seq=2 ttl=53 time=47.756 ms
1480 bytes from 9.9.9.9: icmp_seq=3 ttl=53 time=47.717 ms
1480 bytes from 9.9.9.9: icmp_seq=4 ttl=53 time=47.807 ms
^C
--- 9.9.9.9 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 47.717/47.763/47.807/0.030 ms
root@OPNsense:~ #

root@OPNsense:~ # ping -D -s 1472 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 1472 data bytes
76 bytes from 8.8.8.8: icmp_seq=0 ttl=116 time=1.770 ms
wrong total length 96 instead of 1500
76 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=1.914 ms
wrong total length 96 instead of 1500
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.770/1.842/1.914/0.072 ms
root@OPNsense:~ #






Also, while troubleshooting this, the question came up in another forum whether setting the MRU/MTU to 1508 is needed at all if max-payload is set to 1500 for the PPPoE interface. It seemed I was getting issues when the MRU wasn't being set, or perhaps I was being thrown off by Google's 8.8.8.8, which is my go-to for testing and was what I used when initially trying to set this up.

I'm wondering if this max-payload is being derived from the MTU setting on the PPPoE interface page, where it would be calculated as 1508 -> 1500 or 1500 -> 1492. It looks like the MRU setting comes from the Link Parameters (vtnet2_vlan35) page under the advanced settings.
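
For my own sanity, the arithmetic behind those two numbers: PPPoE adds 8 bytes of overhead (6-byte PPPoE header + 2-byte PPP protocol field), so

1508 (RFC 4638 baby jumbo Ethernet MTU) - 8 = 1500 PPP MTU / max-payload
1500 (standard Ethernet MTU)            - 8 = 1492 PPP MTU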



set pppoe max-payload 1500 will override mru and mtu
see the last part of section 5.7: http://mpd.sourceforge.net/doc5/mpd.html


My mpd_want.conf, which mpd is currently using, looks like this:

startup:
  # configure the console
  set console close
  # configure the web server
  set web close

default:
pppoeclient:
  create bundle static wan
  set bundle enable ipv6cp
  set iface name pppoe0
  set iface route default
  set iface disable on-demand
  set iface idle 0
  set iface enable tcpmssfix
  set iface up-script /usr/local/opnsense/scripts/interfaces/ppp-linkup.sh
  set iface down-script /usr/local/opnsense/scripts/interfaces/ppp-linkdown.sh
  set ipcp ranges 0.0.0.0/0 0.0.0.0/0
  create link static wan_link0 pppoe
  set link action bundle wan
  set link disable multilink
  set link keep-alive 10 60
  set link max-redial 0
  set link disable chap pap
  set link accept chap pap eap
  set link disable incoming
  set pppoe max-payload 1500      <------ is this being set by MTU 1508->1500 on pppoe interface page?
  set link mru 1508                      <------ MRU is being set to 1508 on Link Parameters ( vtnet2_vlan35 ) page
  set auth authname "xxxxxxxx"
  set auth password xxxxxxxx
  set pppoe service ""
  set pppoe iface vtnet2_vlan35
open




Many thanks in advance
#10

The idea is to use BBR for congestion control, similar to how FQ_CoDel is used here:
https://www.youtube.com/watch?v=iXqExAALzR8&ab_channel=LawrenceSystems
#11
Hello @pmhausen

There are a number of users in the DSL Reports forum (and a spun-off Slack/Discord) who report that using TCP BBR congestion control on their Linux firewalls (the Linux kernel has had BBR support for a long time; it was only recently added to FreeBSD) has increased their download/upload speeds.

Since the traffic coming in from the internal network tends to flood the GPON ONT upstream on the fibre network, they don't get the full upload speed. They are using BBR to limit that traffic from their network so they can get the full upload speeds.
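
(For context, on the Linux side enabling BBR is just a couple of sysctls; the fq qdisc is the usual companion since BBR relies on pacing. This is what those Linux firewalls are running, not something for OPNsense as-is:)

sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr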


=======================
JAMESMTL  9:22 PM
at 2500 you can exceed the provisioned upload rate causing packet loss, packet retransmissions, and performance degradation.
9:25
This is what it looks like for me under various conditions; performance loss is affected by latency.
1000 (No Throttle)                 Latency   Down    Up
Bell Alliant - Halifax (3907)           24    792   919
Bell Canada - Montreal (17567)           1    901   933
Bell Canada - Toronto (17394)            8    765   933
Bell Mobility - Winnepeg (17395)        30    874   913
Bell Mobility - Calgary (17399)         47    782   898

2500 (No Throttle)                 Latency   Down    Up
Bell Alliant - Halifax (3907)           24   1564   358
Bell Canada - Montreal (17567)           1   1638   933
Bell Canada - Toronto (17394)            8   1619   771
Bell Mobility - Winnepeg (17395)        30   1569   324
Bell Mobility - Calgary (17399)         47   1570   236

2500 (Throttled 1000 up)           Latency   Down    Up
Bell Alliant - Halifax (3907)           24   1558   915
Bell Canada - Montreal (17567)           1   1650   931
Bell Canada - Toronto (17394)            8   1620   931
Bell Mobility - Winnepeg (17395)        30   1569   885
Bell Mobility - Calgary (17399)         47   1600   890

2500 (Throttled 1000 up + bbr)     Latency   Down    Up
Bell Alliant - Halifax (3907)           24   1541   922
Bell Canada - Montreal (17567)           1   1639   940
Bell Canada - Toronto (17394)            8   1583   936
Bell Mobility - Winnepeg (17395)        30   1617   928
Bell Mobility - Calgary (17399)         47   1547   916
=======================


I'll see if I can find more details.
#12

My current FreeBSD 13.0 VM is running a custom kernel called KERNEL-BBR:
root@freebsd:~ # uname -a
FreeBSD freebsd 13.0-RELEASE FreeBSD 13.0-RELEASE #1: Fri Dec 31 12:29:03 EST 2021     root@freebsd:/usr/obj/usr/src/amd64.amd64/sys/KERNEL-BBR  amd64
root@freebsd:~ #

So it is a matter of incorporating these options into the OPNsense FreeBSD build, rebuilding, and transporting the kernel to my OPNsense VM later. But if we can incorporate this into the master branch, I can avoid that, provided I can prove it works for my setup in early 2022 when 22.1 comes out.
#13

I've been playing with this since I wrote the email to the project last night (I posted my original email to them moments ago). I now have TCP BBR congestion control enabled in a FreeBSD 13.0 VM, and I assume it would be just as easy to migrate to FreeBSD 13.1 if the kernel version difference matters.

Looks like it is a matter of adding

makeoptions  WITH_EXTRA_TCP_STACKS=1
options      TCPHPTS

to the kernel configuration file, then rebuilding and installing via "make buildworld", "make buildkernel KERNCONF=KERNEL-BBR", "make installkernel KERNCONF=KERNEL-BBR" and "make installworld" (the handbook link below has the exact order; a quick sketch follows the links).
https://docs.freebsd.org/en/books/handbook/cutting-edge/index.html#makeworld
https://lists.freebsd.org/pipermail/freebsd-current/2020-April/075930.html
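
In other words, roughly this sequence (a sketch of the handbook procedure, assuming sources in /usr/src and the custom config saved as sys/amd64/conf/KERNEL-BBR):

cd /usr/src
make buildworld
make buildkernel KERNCONF=KERNEL-BBR
make installkernel KERNCONF=KERNEL-BBR
# reboot (ideally into single-user mode, per the handbook) before installing world
shutdown -r now
make installworld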

After that it was a matter of:

kldload tcp_bbr
sudo sysctl net.inet.tcp.functions_default=bbr

or, to make it permanent, adding:

tcp_bbr_load="YES"                     to /boot/loader.conf.local
net.inet.tcp.functions_default=bbr     to /etc/sysctl.conf
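
To double-check it took effect (stock FreeBSD commands, nothing custom):

kldstat | grep tcp_bbr                     # confirms the module is loaded
sysctl net.inet.tcp.functions_available    # lists the available TCP stacks; bbr should appear
sysctl net.inet.tcp.functions_default      # should now report bbr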

Looks like BBR loads into the kernel correctly:
=========================================================
Attempting to load tcp_bbr
tcp_bbr is now available
Attempting to load tcp_bbr
tcp_bbr is now available
vm.uma.tcp_bbr_pcb.stats.xdomain: 0
vm.uma.tcp_bbr_pcb.stats.fails: 0
vm.uma.tcp_bbr_pcb.stats.frees: 5
vm.uma.tcp_bbr_pcb.stats.allocs: 10
vm.uma.tcp_bbr_pcb.stats.current: 5
vm.uma.tcp_bbr_pcb.domain.0.wss: 0
vm.uma.tcp_bbr_pcb.domain.0.imin: 0
vm.uma.tcp_bbr_pcb.domain.0.imax: 0
vm.uma.tcp_bbr_pcb.domain.0.nitems: 0
vm.uma.tcp_bbr_pcb.limit.bucket_max: 18446744073709551615
vm.uma.tcp_bbr_pcb.limit.sleeps: 0
vm.uma.tcp_bbr_pcb.limit.sleepers: 0
vm.uma.tcp_bbr_pcb.limit.max_items: 0
vm.uma.tcp_bbr_pcb.limit.items: 0
vm.uma.tcp_bbr_pcb.keg.domain.0.free_items: 0
vm.uma.tcp_bbr_pcb.keg.domain.0.pages: 8
vm.uma.tcp_bbr_pcb.keg.efficiency: 91
vm.uma.tcp_bbr_pcb.keg.reserve: 0
vm.uma.tcp_bbr_pcb.keg.align: 63
vm.uma.tcp_bbr_pcb.keg.ipers: 9
vm.uma.tcp_bbr_pcb.keg.ppera: 2
vm.uma.tcp_bbr_pcb.keg.rsize: 832
vm.uma.tcp_bbr_pcb.keg.name: tcp_bbr_pcb
vm.uma.tcp_bbr_pcb.bucket_size_max: 254
vm.uma.tcp_bbr_pcb.bucket_size: 16
vm.uma.tcp_bbr_pcb.flags: 0x810000<VTOSLAB,FIRSTTOUCH>
vm.uma.tcp_bbr_pcb.size: 832
vm.uma.tcp_bbr_map.stats.xdomain: 0
vm.uma.tcp_bbr_map.stats.fails: 0
vm.uma.tcp_bbr_map.stats.frees: 4971
vm.uma.tcp_bbr_map.stats.allocs: 4973
vm.uma.tcp_bbr_map.stats.current: 2
vm.uma.tcp_bbr_map.domain.0.wss: 348
vm.uma.tcp_bbr_map.domain.0.imin: 126
vm.uma.tcp_bbr_map.domain.0.imax: 126
vm.uma.tcp_bbr_map.domain.0.nitems: 126
vm.uma.tcp_bbr_map.limit.bucket_max: 18446744073709551615
vm.uma.tcp_bbr_map.limit.sleeps: 0
vm.uma.tcp_bbr_map.limit.sleepers: 0
vm.uma.tcp_bbr_map.limit.max_items: 0
vm.uma.tcp_bbr_map.limit.items: 0
vm.uma.tcp_bbr_map.keg.domain.0.free_items: 0
vm.uma.tcp_bbr_map.keg.domain.0.pages: 26
vm.uma.tcp_bbr_map.keg.efficiency: 96
vm.uma.tcp_bbr_map.keg.reserve: 0
vm.uma.tcp_bbr_map.keg.align: 7
vm.uma.tcp_bbr_map.keg.ipers: 31
vm.uma.tcp_bbr_map.keg.ppera: 1
vm.uma.tcp_bbr_map.keg.rsize: 128
vm.uma.tcp_bbr_map.keg.name: tcp_bbr_map
vm.uma.tcp_bbr_map.bucket_size_max: 254
vm.uma.tcp_bbr_map.bucket_size: 126
vm.uma.tcp_bbr_map.flags: 0x10000<FIRSTTOUCH>
vm.uma.tcp_bbr_map.size: 128
net.inet.tcp.bbr.clrlost: 0
net.inet.tcp.bbr.software_pacing: 5
net.inet.tcp.bbr.hdwr_pacing: 0
net.inet.tcp.bbr.enob_no_hdwr_pacing: 0
net.inet.tcp.bbr.enob_hdwr_pacing: 0
net.inet.tcp.bbr.rtt_tlp_thresh: 1
net.inet.tcp.bbr.reorder_fade: 60000000
net.inet.tcp.bbr.reorder_thresh: 2
net.inet.tcp.bbr.bb_verbose: 0
net.inet.tcp.bbr.sblklimit: 128
net.inet.tcp.bbr.resend_use_tso: 0
net.inet.tcp.bbr.data_after_close: 1
net.inet.tcp.bbr.kill_paceout: 10
net.inet.tcp.bbr.error_paceout: 10000
net.inet.tcp.bbr.cheat_rxt: 1
net.inet.tcp.bbr.policer.false_postive_thresh: 100
net.inet.tcp.bbr.policer.loss_thresh: 196
net.inet.tcp.bbr.policer.false_postive: 0
net.inet.tcp.bbr.policer.from_rack_rxt: 0
net.inet.tcp.bbr.policer.bwratio: 8
net.inet.tcp.bbr.policer.bwdiff: 500
net.inet.tcp.bbr.policer.min_pes: 4
net.inet.tcp.bbr.policer.detect_enable: 1
net.inet.tcp.bbr.minrto: 30
net.inet.tcp.bbr.timeout.rxtmark_sackpassed: 0
net.inet.tcp.bbr.timeout.incr_tmrs: 1
net.inet.tcp.bbr.timeout.pktdelay: 1000
net.inet.tcp.bbr.timeout.minto: 1000
net.inet.tcp.bbr.timeout.tlp_retry: 2
net.inet.tcp.bbr.timeout.maxrto: 4
net.inet.tcp.bbr.timeout.tlp_dack_time: 200000
net.inet.tcp.bbr.timeout.tlp_minto: 10000
net.inet.tcp.bbr.timeout.persmax: 1000000
net.inet.tcp.bbr.timeout.persmin: 250000
net.inet.tcp.bbr.timeout.tlp_uses: 3
net.inet.tcp.bbr.timeout.delack: 100000
net.inet.tcp.bbr.cwnd.drop_limit: 0
net.inet.tcp.bbr.cwnd.target_is_unit: 0
net.inet.tcp.bbr.cwnd.red_mul: 1
net.inet.tcp.bbr.cwnd.red_div: 2
net.inet.tcp.bbr.cwnd.red_growslow: 1
net.inet.tcp.bbr.cwnd.red_scale: 20000
net.inet.tcp.bbr.cwnd.do_loss_red: 600
net.inet.tcp.bbr.cwnd.initwin: 10
net.inet.tcp.bbr.cwnd.lowspeed_min: 4
net.inet.tcp.bbr.cwnd.highspeed_min: 12
net.inet.tcp.bbr.cwnd.max_target_limit: 8
net.inet.tcp.bbr.cwnd.may_shrink: 0
net.inet.tcp.bbr.cwnd.tar_rtt: 0
net.inet.tcp.bbr.startup.loss_exit: 1
net.inet.tcp.bbr.startup.low_gain: 25
net.inet.tcp.bbr.startup.gain: 25
net.inet.tcp.bbr.startup.use_lowerpg: 1
net.inet.tcp.bbr.startup.loss_threshold: 2000
net.inet.tcp.bbr.startup.cheat_iwnd: 1
net.inet.tcp.bbr.states.google_exit_loss: 1
net.inet.tcp.bbr.states.google_gets_earlyout: 1
net.inet.tcp.bbr.states.use_cwnd_maindrain: 1
net.inet.tcp.bbr.states.use_cwnd_subdrain: 1
net.inet.tcp.bbr.states.subdrain_applimited: 1
net.inet.tcp.bbr.states.dr_filter_life: 8
net.inet.tcp.bbr.states.rand_ot_disc: 50
net.inet.tcp.bbr.states.ld_mul: 4
net.inet.tcp.bbr.states.ld_div: 5
net.inet.tcp.bbr.states.gain_extra_time: 1
net.inet.tcp.bbr.states.gain_2_target: 1
net.inet.tcp.bbr.states.drain_2_target: 1
net.inet.tcp.bbr.states.drain_floor: 88
net.inet.tcp.bbr.states.startup_rtt_gain: 0
net.inet.tcp.bbr.states.use_pkt_epoch: 0
net.inet.tcp.bbr.states.idle_restart_threshold: 100000
net.inet.tcp.bbr.states.idle_restart: 0
net.inet.tcp.bbr.measure.noretran: 0
net.inet.tcp.bbr.measure.quanta: 3
net.inet.tcp.bbr.measure.min_measure_before_pace: 4
net.inet.tcp.bbr.measure.min_measure_good_bw: 1
net.inet.tcp.bbr.measure.ts_delta_percent: 150
net.inet.tcp.bbr.measure.ts_peer_delta: 20
net.inet.tcp.bbr.measure.ts_delta: 20000
net.inet.tcp.bbr.measure.ts_can_raise: 0
net.inet.tcp.bbr.measure.ts_limiting: 1
net.inet.tcp.bbr.measure.use_google: 1
net.inet.tcp.bbr.measure.no_sack_needed: 0
net.inet.tcp.bbr.measure.min_i_bw: 62500
net.inet.tcp.bbr.pacing.srtt_div: 2
net.inet.tcp.bbr.pacing.srtt_mul: 1
net.inet.tcp.bbr.pacing.seg_divisor: 1000
net.inet.tcp.bbr.pacing.utter_max: 0
net.inet.tcp.bbr.pacing.seg_floor: 1
net.inet.tcp.bbr.pacing.seg_tso_max: 2
net.inet.tcp.bbr.pacing.tso_min: 1460
net.inet.tcp.bbr.pacing.all_get_min: 0
net.inet.tcp.bbr.pacing.google_discount: 10
net.inet.tcp.bbr.pacing.tcp_oh: 1
net.inet.tcp.bbr.pacing.ip_oh: 1
net.inet.tcp.bbr.pacing.enet_oh: 0
net.inet.tcp.bbr.pacing.seg_deltarg: 7000
net.inet.tcp.bbr.pacing.bw_cross: 2896000
net.inet.tcp.bbr.pacing.hw_pacing_delay_cnt: 10
net.inet.tcp.bbr.pacing.hw_pacing_floor: 1
net.inet.tcp.bbr.pacing.hw_pacing_adj: 2
net.inet.tcp.bbr.pacing.hw_pacing_limit: 8000
net.inet.tcp.bbr.pacing.hw_pacing: 0
net.inet.tcp.bbr.probertt.can_use_ts: 1
net.inet.tcp.bbr.probertt.use_cwnd: 1
net.inet.tcp.bbr.probertt.is_ratio: 0
net.inet.tcp.bbr.probertt.can_adjust: 1
net.inet.tcp.bbr.probertt.enter_sets_force: 0
net.inet.tcp.bbr.probertt.can_force: 0
net.inet.tcp.bbr.probertt.drain_rtt: 3
net.inet.tcp.bbr.probertt.filter_len_sec: 6
net.inet.tcp.bbr.probertt.mintime: 200000
net.inet.tcp.bbr.probertt.int: 4000000
net.inet.tcp.bbr.probertt.cwnd: 4
net.inet.tcp.bbr.probertt.gain: 192
bbr                             * bbr                              5
net.inet.tcp.functions_default: bbr
        value:  /boot/kernel/tcp_bbr.ko
=========================================================




I would like to know how to build from their release and just add these options at this point:
https://github.com/opnsense/tools

I see their kernel configuration is at https://github.com/opnsense/tools/tree/master/config/22.1, but I'm not sure whether I have to grab more than their kernel config ("SMP" or whatever it is) to rebuild the kernel, and how do I transport the result from my development FreeBSD VM over to my OPNsense VM when the time comes, after 22.1 has been released?

My current OPNsense shows it is using /usr/obj/usr/src/amd64.amd64/sys/SMP, which matches the SMP config file in https://github.com/opnsense/tools/tree/master/config/22.1 (or 21.7 in my case).

root@OPNsense:~ # uname -a
FreeBSD OPNsense.localdomain 12.1-RELEASE-p21-HBSD FreeBSD 12.1-RELEASE-p21-HBSD #0  04bde01a034(stable/21.7)-dirty: Mon Dec 13 09:07:56 CET 2021     root@sensey:/usr/obj/usr/src/amd64.amd64/sys/SMP  amd64
root@OPNsense:~ #

So I'd like to know how to rebuild the OPNsense kernel and move it over to replace the current one.


#14

I emailed the project directly, but they said they don't have a lot of resources to look at this; perhaps I can help contribute this change if we can vet it for the OPNsense project.


This is what I wrote to the project@opnsense.org email:
_____________________
With OPNsense moving to FreeBSD 13.1 in the new year, can you look at enabling the TCP BBR congestion control kernel module in the OPNsense project so that people can rate limit their connections using BBR?

Apparently this option has had great results on Linux: people are able to get their full gigabit upload and download speeds, because otherwise the network interface floods the ONT, and Linux users have been using BBR to reach the full data rates.

I use OPNsense, having previously used pfSense on a FreeBSD 11.x base, and that project hasn't kept up with the latest FreeBSD changes. I don't want to switch to a Linux firewall project because they don't have an easy-to-use interface like pfSense and OPNsense.

I don't think it is anything more than turning on a few options and make options in the kernel config to enable the ability to use it, as it still requires a sysctl setting to actually turn it on.

I'd just like to avoid having to redo this work every time there is a new release of OPNsense.

Perhaps if I get it working after the 22.1 release is out, I can provide the steps to the OPNsense project for inclusion if it is a simple change, and the OPNsense GUI could be enhanced later to let end users select the congestion control algorithm, including the FQ_CoDel stuff.

Can you provide your kernel config file for the 22.1 FreeBSD target?

This is the page referencing TCP BBR for FreeBSD, FYI. I understand it is just these lines added to the kernel configuration file at build time, but let me test and get back to the project:

makeoptions WITH_EXTRA_TCP_STACKS=1
options TCPHPTS

https://lists.freebsd.org/pipermail/freebsd-current/2020-April/075930.html

If you can point me to the appropriate forum / relevant section of the forum, I can also report my results back there and link to your project once it is proven.


As far as real kernel modules go, I know someone in another forum who developed changes to the bxe driver to allow syncing at 2.5Gbps, but I'm not sure if he submitted them upstream to the FreeBSD and Linux (bnx2x) communities; that's relatively easy to rebuild at different kernel releases. I can approach the FreeBSD project about incorporating those changes.


#15
OK, then how can I throttle the upload to 1000Mbps like they are doing with Linux? It seems BBR's effect is negligible, or it is just doing nothing.

I've tried using pipes/queues/rules, but that had a detrimental effect. FQ_CODEL doesn't seem to help.
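
For the record, what I was attempting in the Shaper, expressed as a raw ipfw/dummynet sketch purely for illustration (on OPNsense this is normally configured through the Shaper GUI rather than by hand, so this is conceptual only):

# cap upload toward the WAN at roughly 1000 Mbit/s
ipfw pipe 1 config bw 1000Mbit/s
ipfw add 100 pipe 1 ip from any to any out via pppoe0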

I'm trying to understand why my desktop, which is connected through a switch to my OPNsense, gets 930/660 when the GPON module on my Broadcom bxe/bnx2x syncs at 2.5Gbps, but 700/930 when it syncs at 1Gbps.

The project I'm referring to is on dslreports.com in the Bell Canada forums, where people are syncing the card at 2.5Gbps to get the full 1.5+Gbps/1Gbps and passing it on to their 10Gbps LAN if they have one. In my case I'm trying to get 1G/1G before I explore that.

The folks on the dslreports.com forum and in a private Slack seem to think that we are flooding the ONT on upload, which is why they are using BBR and throttling the egress traffic from the Broadcom card up to the GPON network.