Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - bbin

#1
General Discussion / Re: Move to 14.1?
May 13, 2024, 11:46:31 AM
Quote from: franco on May 13, 2024, 11:33:31 AM
> FreeBSD 14.0 was released in November 2023.

FWIW, this is just a fact. If we act on release schedules by third parties we can't maintain our own schedules. If we don't look at quality of releases either we run the risk of complaints more than "why haven't you XYZ" as it ends up as "why have you XYZ" much more loudly ;)

Also keep in mind that when comparing to other projects they tend to market everything they did better as sensational, but don't really tell you they avoided FreeBSD 13 with all of its benefits and haven't really put an effort into backporting their changes into this stable version either, so nobody who uses FreeBSD 13 can benefit from it in the interim... which would have been a more standard FreeBSD release engineering policy. But all of this is what it is and we will reach an acceptable goal for ourselves eventually.

Totally fair.  :)  I'd expect the project team is thinking through the right balance.
#2
General Discussion / Re: Move to 14.1?
May 13, 2024, 11:09:13 AM
Quote from: Patrick M. Hausen on May 13, 2024, 12:48:57 AM
Quote from: JasonJoel on May 13, 2024, 12:37:31 AM
I genuinely hope NOT! This is supposed to be a security platform, not a bleeding edge / use fresh untested code platform.

I mean, they can do what they want of course, but I definitely would not install the 24.7 release if it is FreeBSD 14.1 based. Too new for my tastes.
Seriously? I run FreeBSD 14.0 in (not OPNsense) production so even with the "never run a .0 release" recommendation 14.1 looks like a very reliable bet to me. I sincerely hope the team around Franco and Ad jump to 14.1.

Kind regards,
Patrick

I would tend to agree with Patrick.  FreeBSD 14.0 was released in November 2023.  Some components have been backported into OPNsense already.  The code seems relatively stable, and there are noticeable performance improvements.

It would be one thing if I were suggesting moving to the FreeBSD 15 codebase, but 14 seems fine so far.  The pfSense people have been running on the 14 codebase for a while, so we have some data from their efforts.  I recall franco suggesting that 14.1 could be an option at some point.  I for one would be happy to see the next release based on 14.1.

Quote from: hazuki on May 13, 2024, 12:31:40 AM
One may test the FreeBSD 14 kernel in OPNsense after selecting the snapshot:

opnsense-update -zkbr 14-STABLE -a FreeBSD:14:amd64

WARNING:
1) ISC DHCPv4 (and probably ISC DHCPv6) will fail to start (at least in my setup) after applying the FreeBSD 14 kernel, due to [object "libcrypto.so.111" not found]. One should migrate to Kea DHCP before testing.
2) In relation to [object "libcrypto.so.111" not found], the "pkg" command and any other binary linked against libcrypto.so also stop working. "pkg-static bootstrap -f" will not solve the problem, as basically none of the binaries/packages required under the FreeBSD 14 kernel are available in the OPNsense repo.

Test FreeBSD 14 kernel at your own risk!

This FreeBSD 14.1 kernel does indeed boost WireGuard speed.
More information is at https://forum.opnsense.org/index.php?topic=40413.msg198242#msg198242

EDIT 1: added the missing -b switch in the code. Updating only the kernel without the base files leads to messed-up routing.
EDIT 2: added warning regarding failed ISC DHCPv4
EDIT 3: added warning regarding failed pkg command

I thought about testing on the new kernel, but as you point out base and ports/pkg haven't been updated in line with the kernel yet.
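For anyone hitting the same wall, the "libcrypto.so.111 not found" failures quoted above are runtime-linker errors, and ldd shows exactly which shared objects a binary resolves (missing ones print as "not found"). A sketch using /bin/sh as a universally present stand-in; the pkg path in the comment is the usual FreeBSD location:

```shell
# List the shared libraries the runtime linker resolves for a binary;
# a broken dependency shows up in this output as "not found".
ldd /bin/sh

# On the firewall itself, the interesting binary would be pkg, e.g.:
#   ldd /usr/local/sbin/pkg | grep libcrypto
```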

Quote from: hazuki on May 13, 2024, 11:08:06 AM
I am not sure if I am looking in the right place, but in https://github.com/opnsense/src/blob/volatile/24.7/sys/conf/newvers.sh it looks like 24.7 will be based on FreeBSD 14.1:

TYPE="FreeBSD"
REVISION="14.1"
BRANCH="BETA1"


If this is true, I'm really looking forward to testing the 24.7 BETA, as the performance gain in WireGuard is really astonishing.
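As a side note, newvers.sh is an ordinary shell script, so the quoted fragment can be sourced directly to see the release string it produces; the heredoc below just replays the three values quoted above:

```shell
# Recreate the quoted fragment of sys/conf/newvers.sh locally:
cat > /tmp/newvers-excerpt.sh <<'EOF'
TYPE="FreeBSD"
REVISION="14.1"
BRANCH="BETA1"
EOF

# Source it and compose the version string the way FreeBSD does:
. /tmp/newvers-excerpt.sh
echo "${TYPE} ${REVISION}-${BRANCH}"   # prints: FreeBSD 14.1-BETA1
```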

+1 for me as well.  :)
#3
General Discussion / Move to 14.1?
May 08, 2024, 05:46:28 PM
I recall seeing another post about this a while back, but I couldn't find it with the forum search function.

I just saw that FreeBSD 14.1 BETA1 was released, and it's (currently) on target for a June launch.  The roadmap for the next release currently shows refactoring toward the FreeBSD 13.3 codebase.  Would there be any possibility of moving toward 14.1 this summer?  Between the updated Intel drivers, the network/WireGuard performance enhancements, etc., I would expect some tangible benefits.
#4
23.7 Legacy Series / Re: Unbound crashing
August 24, 2023, 03:27:11 PM
So good news, bad news.  The bad news is that unbound is still failing after roughly 30 minutes.  The good news is that the patch allows me to restart the service without rebooting.
#5
I'm running 23.7.2 on a brand new Protectli VP4670.  After a clean reboot, things seem to run fine for roughly 30 minutes, after which unbound seems unable to resolve anything.  When I take a look at the logs, I see entries similar to this:

2023-08-23T22:14:59-05:00   Error   unbound   [72972:2] error: SERVFAIL <connectivitycheck.gstatic.com. AAAA IN>: all the configured stub or forward servers failed, at zone . no server to query nameserver addresses not usable have no nameserver names

I was using DoT with Cloudflare, but have also just switched to regular DNS resolution using the system resolvers and get the same result.  After rebooting, unbound seems to be able to resolve again.

Has anyone else run into this?  franco or others with the project - any ideas?
#6
Some thoughts/suggestions:

With the way I tend to use DNSBLs, it would be ideal to configure selection on a per-blocklist basis, with the ability to target or exempt each blocklist based on IP, range, network, or alias.  The adlist targeting in pihole is a great example here: in pihole, you create groups in the "Clients" module and then target adlists using the "Group assignment" function.

My current configuration uses pihole as the DNS resolver handed out to DHCP clients, with unbound on OPNsense as pihole's upstream DNS server.  If I could easily add custom blocklists and target them individually, I would get rid of my pihole.
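For what it's worth, stock unbound already has a tag mechanism (define-tag, access-control-tag, local-zone-tag) that can scope a blocked zone to particular client ranges, which is roughly the per-client targeting described above; there's just no OPNsense UI for it. A hand-written unbound.conf sketch with placeholder addresses and domains:

```
server:
    define-tag: "ads"
    access-control: 192.168.1.0/24 allow
    # tag only the clients the blocklist should apply to
    access-control-tag: 192.168.1.0/25 "ads"
    # this zone is blocked only for queries carrying the "ads" tag
    local-zone: "ads.example.com." always_nxdomain
    local-zone-tag: "ads.example.com." "ads"
```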
#7
Looks like WireGuard was just committed to the FreeBSD kernel.

https://www.phoronix.com/news/FreeBSD-WireGuard-Lands-2022

What are the current plans for incorporating it into OPNsense?
#8
Hi all,

Was looking into ways to more easily configure my network to use both OPNsense and a pihole for DNS filtering.  The pihole developers wrote up a guide using dnsmasq's EDNS Client Subnet (ECS) support to pass client IP information from OPNsense to the pihole DNS resolver.  Reading through the man page for unbound.conf, this appears to be possible, but OPNsense's configd doesn't appear to have UI support to enable or configure ECS in unbound.  franco or others on the team - is this something you've explored?

Pihole guide for client support with opnsense: https://pi-hole.net/2021/09/30/pi-hole-and-opnsense/#page-content

From the unbound.conf man page:

EDNS Client Subnet Module Options
       The ECS module must be configured in the module-config: "subnetcache
       validator iterator" directive and be compiled into the daemon to be
       enabled.  These settings go in the server: section.

       If the destination address is allowed in the configuration Unbound will
       add the EDNS0 option to the query containing the relevant part of the
       client's address.  When an answer contains the ECS option the response
       and the option are placed in a specialized cache. If the authority
       indicated no support, the response is stored in the regular cache.

       Additionally, when a client includes the option in its queries, Unbound
       will forward the option when sending the query to addresses that are
       explicitly allowed in the configuration using send-client-subnet. The
       option will always be forwarded, regardless the allowed addresses, if
       client-subnet-always-forward is set to yes. In this case the lookup in
       the regular cache is skipped.

       The maximum size of the ECS cache is controlled by 'msg-cache-size' in
       the configuration file. On top of that, for each query only 100
       different subnets are allowed to be stored for each address family.
       Exceeding that number, older entries will be purged from cache.

       send-client-subnet: <IP address>
              Send client source address to this authority. Append /num to
              indicate a classless delegation netblock, for example like
              10.2.3.4/24 or 2001::11/64. Can be given multiple times.
              Authorities not listed will not receive edns-subnet information,
              unless domain in query is specified in client-subnet-zone.

       client-subnet-zone: <domain>
              Send client source address in queries for this domain and its
              subdomains. Can be given multiple times. Zones not listed will
              not receive edns-subnet information, unless hosted by authority
              specified in send-client-subnet.

       client-subnet-always-forward: <yes or no>
              Specify whether the ECS address check (configured using
              send-client-subnet) is applied for all queries, even if the
              triggering query contains an ECS record, or only for queries for
              which the ECS record is generated using the querier address (and
              therefore did not contain ECS data in the client query). If
              enabled, the address check is skipped when the client query
              contains an ECS record. And the lookup in the regular cache is
              skipped.  Default is no.

       max-client-subnet-ipv6: <number>
              Specifies the maximum prefix length of the client source address
              we are willing to expose to third parties for IPv6.  Defaults to
              56.

       max-client-subnet-ipv4: <number>
              Specifies the maximum prefix length of the client source address
              we are willing to expose to third parties for IPv4. Defaults to
              24.

       min-client-subnet-ipv6: <number>
              Specifies the minimum prefix length of the IPv6 source mask we
              are willing to accept in queries. Shorter source masks result in
              REFUSED answers. Source mask of 0 is always accepted. Default is
              0.

       min-client-subnet-ipv4: <number>
              Specifies the minimum prefix length of the IPv4 source mask we
              are willing to accept in queries. Shorter source masks result in
              REFUSED answers. Source mask of 0 is always accepted. Default is
              0.

       max-ecs-tree-size-ipv4: <number>
              Specifies the maximum number of subnets ECS answers kept in the
              ECS radix tree.  This number applies for each qname/qclass/qtype
              tuple. Defaults to 100.

       max-ecs-tree-size-ipv6: <number>
              Specifies the maximum number of subnets ECS answers kept in the
              ECS radix tree.  This number applies for each qname/qclass/qtype
              tuple. Defaults to 100.
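Pulling those directives together, a minimal server-section sketch (the 10.0.0.2 upstream address and the prefix lengths are placeholders, not tested against OPNsense's generated config):

```
server:
    # the subnetcache module must come first in module-config
    module-config: "subnetcache validator iterator"
    # send ECS data only to this upstream (placeholder address)
    send-client-subnet: 10.0.0.2
    # never expose more than a /24 (IPv4) or /56 (IPv6) of the client address
    max-client-subnet-ipv4: 24
    max-client-subnet-ipv6: 56
```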
#9
Here's what I currently have set up.  If you see anything amiss, please let me know.  I've tried a variety of values for the download pipe's queues and FQ-CoDel settings and haven't found an optimal configuration yet.  I've also tried switching from newreno to HTCP via the tunables to see if that makes any difference.

Pipes:
upload:
bandwidth 20 Mbps
no mask
scheduler FlowQueue-CoDel
(FQ-)CoDel ECN checked

download:
bandwidth 360 Mbps
queues 2
no mask
scheduler FlowQueue-CoDel
FQ-CoDel quantum 1080
FQ-CoDel limit 1000

queues: * (FQ-)CoDel ECN checked on all queues
Upstream pipe   1   Upstream queue   mask source
Upstream pipe   10   Upstream high priority    mask source
Downstream pipe   1   Downstream queue    mask destination
Downstream pipe   10   Downstream high priority queue    mask destination

rules:
1   WAN   udp   <firewall and pihole IPs>   any   out   Upstream high priority   DNS High Priority
2   WAN   tcp (ACK packets only)   any   any   out   Upstream high priority   Upload ACK
3   WAN   ipv4   any   any   out   Upstream queue   Upstream
4   WAN   tcp (ACK packets only)   any   any   in   Downstream high priority queue   Downstream high priority
5   WAN   ipv4   any   any   in   Downstream queue   Downstream

tunables:
debug.pfftpproxy   Disable the pf ftp proxy handler.   unsupported   unknown   
dev.igb.0.eee_disabled      unsupported   1   
dev.igb.1.eee_disabled      unsupported   1   
hw.igb.num_queues      unsupported   0   
hw.igb.rx_process_limit      unsupported   -1   
hw.igb.tx_process_limit      unsupported   -1   
hw.syscons.kbd_reboot   Disable CTRL+ALT+Delete reboot from keyboard.   runtime   default (0)   
kern.ipc.maxsockbuf   Maximum socket buffer size   runtime   default (4262144)   
kern.randompid   Randomize PID's (see src/sys/kern/kern_fork.c: sysctl_kern_randompid())   runtime   default (1)   
legal.intel_igb.license_ack      unsupported   1   
net.inet.icmp.drop_redirect   Redirect attacks are the purposeful mass-issuing of ICMP type 5 packets. In a normal network, redirects to the end stations should not be required. This option enables the NIC to drop all inbound ICMP redirect packets without returning a response.   runtime   1   
net.inet.icmp.icmplim   Set ICMP Limits   runtime   default (0)   
net.inet.icmp.log_redirect   This option turns off the logging of redirect packets because there is no limit and this could fill up your logs consuming your whole hard drive.   runtime   default (0)   
net.inet.ip.accept_sourceroute   Source routing is another way for an attacker to try to reach non-routable addresses behind your box. It can also be used to probe for information about your internal networks. These functions come enabled as part of the standard FreeBSD core system.   runtime   default (0)   
net.inet.ip.fastforwarding   IP Fastforwarding   unsupported   unknown   
net.inet.ip.intr_queue_maxlen   Maximum size of the IP input queue   runtime   default (1000)   
net.inet.ip.portrange.first   Set the ephemeral port range to be lower.   runtime   default (1024)   
net.inet.ip.random_id   Randomize the ID field in IP packets (default is 0: sequential IP IDs)   runtime   default (1)   
net.inet.ip.redirect   Enable sending IPv4 redirects   runtime   0   
net.inet.ip.sourceroute   Source routing is another way for an attacker to try to reach non-routable addresses behind your box. It can also be used to probe for information about your internal networks. These functions come enabled as part of the standard FreeBSD core system.   runtime   default (0)   
net.inet.tcp.blackhole   Drop packets to closed TCP ports without returning a RST   runtime   default (2)   
net.inet.tcp.cc.algorithm   Default congestion control algorithm   runtime   htcp   
net.inet.tcp.cc.htcp.adaptive_backoff      unsupported   1   
net.inet.tcp.cc.htcp.rtt_scaling      unsupported   1   
net.inet.tcp.delayed_ack   Do not delay ACK to try and piggyback it onto a data packet   runtime   default (0)   
net.inet.tcp.drop_synfin   Drop SYN-FIN packets (breaks RFC1379, but nobody uses it anyway)   runtime   default (1)   
net.inet.tcp.log_debug   Enable TCP extended debugging   runtime   default (0)   
net.inet.tcp.recvbuf_max   Max size of automatic receive buffer   runtime   4194304   
net.inet.tcp.recvspace   Maximum incoming/outgoing TCP datagram size (receive)   runtime   default (65228)   
net.inet.tcp.sendbuf_max   Max size of automatic send buffer   runtime   4194304   
net.inet.tcp.sendspace   Maximum incoming/outgoing TCP datagram size (send)   runtime   default (65228)   
net.inet.tcp.syncookies   Generate SYN cookies for outbound SYN-ACK packets   runtime   default (1)   
net.inet.tcp.tso   TCP Offload Engine   runtime   default (1)   
net.inet.udp.blackhole   Do not send ICMP port unreachable messages for closed UDP ports   runtime   default (1)   
#10
After a little more testing, it looks like the major hit came from RSS being enabled.  I've disabled RSS.  Still doing some tuning to figure out how to get the performance back where it should be.
#11
Update:

I've adjusted the pipes a bit and am seeing less latency, but the why isn't making much sense to me.

Dropped the bandwidth on the download pipe to 200 Mb (from 360).  Removed the queue, FQ-CoDel limit, and FQ-CoDel flows values.  I'm getting a much more stable rating on the Waveform bufferbloat test, but obviously had to give up a lot of bandwidth.  It also seems like downloads are going waaaay slower.
#12
I'm noticing huge throughput differences between development and production.  I had the shaper configured to improve bufferbloat on a 400 Mb cable pipe from Spectrum.  Where I was previously getting ~350 Mb down / ~20 up, I get ~50 Mb down / ~20 up on dev.  I also had major issues with a Zoom call last night where the video was buffering and dropping.

On the Waveform bufferbloat test, I was previously getting +7 ms down / +0 up with my shaper config on prod; on dev I'm getting ~+26 ms down / ~+7 ms up, and the bandwidth takes a nosedive.
#13
Zenarmor (Sensei) / Re: NTP misclassified as proxy
December 22, 2020, 03:44:45 PM
Thanks!

Found a few more examples.  Not sure if they're also covered by your update:

66.228.58.20
149.20.176.27
162.243.194.203
#14
Zenarmor (Sensei) / NTP misclassified as proxy
December 22, 2020, 05:06:45 AM
I'm noticing that ntp queries are being misclassified as proxy.

Example attached.
#15
General Discussion / Re: Remove Double-Nat help needed
August 08, 2020, 05:40:59 PM
I just came across this post, and was wondering the same thing.  Is this something that netgraph could accomplish?  It seems like several people are doing this with AT&T fiber and opnatt.