Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - TheLinuxGuy

#1
It's been a few years since I last attempted to run opnsense on arm - looks like things haven't changed, and opnsense remains an x86/amd64-only platform.

Have the opnsense devs taken an official stance on aarch64? My NanoPi R6S is great with OpenWrt, but opnsense is a much better all-in-one platform to run as a home firewall.
#2
Quote from: Nikotine on March 24, 2021, 11:44:52 PM
Quote from: spikerguy on March 24, 2021, 10:53:22 PM
You need to change the repo of kubsu to this

https://pkg.personalbsd.org/${ABI}-opnsense/21.1/latest

Sleepwalker has moved to his own repo. I will mirror it on my server soon so we have two mirrors for users. Please donate to Sleepwalker for all his work on
https://personalbsd.org/
And how do you change a repo?

https://forum.opnsense.org/index.php?topic=12186.0
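To answer the "how do you change a repo" question: on FreeBSD-based systems pkg repositories are plain text config files. Assuming opnsense keeps its repo definition in the standard pkg location (the path below is my assumption), the change would look something like this:

```
# /usr/local/etc/pkg/repos/OPNsense.conf (assumed path)
OPNsense: {
  url: "pkg+https://pkg.personalbsd.org/${ABI}-opnsense/21.1/latest",
  enabled: yes
}
```

followed by pkg update -f to refresh the catalogue.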
#3
How's it going for those using NanoPi-R4S with OPNsense?

I just ordered this off Amazon and will start to play with it soon. If OPNsense isn't stable I may use OpenWRT instead, but I wanted to ask the community whether updates/packages are truly broken on the NanoPi-R4S platform or not.
#4
Quote from: mimugmail on May 16, 2021, 01:46:57 PM
Interfaces : LAN : MSS, set to 1300.

This is exactly what I had configured and was having issues.

I ended up being able to implement a workaround.

Firewall > Settings > Normalization

Added a rule:
- Interface "IPsec"
- source any
- dest any
- max MSS set to 1350

Restored LAN to have no MSS. So far it's been stable for the past hour while I upload a large file.
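For reference, the normalization rule above should correspond to a pf scrub rule roughly like this (illustrative pf.conf syntax; enc0 as the IPsec interface is an assumption):

```
# Clamp TCP MSS to 1350 for traffic on the IPsec interface
scrub on enc0 all max-mss 1350
```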
#5
I'm having MTU issues (unable to load websites - Dell remote management) over the IPsec tunnel. I have lowered the MTU and MSS settings on my LAN but am still facing issues - if I reboot opnsense it works for a few minutes, so some traffic may respect the MSS, but then it stops working.

pfsense seems to have special settings under IPsec for this condition per https://docs.netgate.com/pfsense/en/latest/vpn/ipsec/advanced.html

Other opnsense users have reported the same issue without resolution: https://forum.opnsense.org/index.php?topic=17881.0

any idea what can be done?
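For context on where clamp values like 1350 come from, here is a back-of-the-envelope worked example (the ESP overhead figure is an assumption; actual overhead depends on cipher and mode):

```shell
# Deriving a safe TCP MSS for an IPsec path (illustrative numbers)
path_mtu=1500          # typical Ethernet MTU on the outer path
esp_overhead=73        # assumed worst-case ESP/tunnel header overhead
tunnel_mtu=$((path_mtu - esp_overhead))
mss=$((tunnel_mtu - 40))   # minus 20 bytes IPv4 + 20 bytes TCP headers
echo "tunnel MTU: $tunnel_mtu, safe MSS: $mss"
```

Anything at or below that MSS should avoid fragmentation inside the tunnel.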
#6
I've been using OPNsense for several years; the product is great and has been rock solid on cable modem ISPs. Several months ago I switched to a fully wireless 5G home internet ISP (T-Mobile), which seems to cause opnsense to randomly crash and services to stop working.

The pattern I have observed is the following:
- internet on LAN and other VLANs stop working completely
- opnsense Web UI indicates a crash and to report it (not super helpful - but have done several reports already)
- opnsense dashboard shows the following services as stopped or failed:
* Unbound DNS
* flowd_aggregate
* My OpenVPN clients are down
* All my gateway monitors are 'offline'

Sometimes restarting unbound is enough to bring everything back online - but only about 1 out of 4 times. Most times I have to do a full reboot of opnsense to get things back online. I also tried a 'reset' of the WAN interface from the interface status menu, to no avail.

I admit my setup may be complex: I run opnsense as a VM on proxmox, and WAN connectivity to the ISP equipment goes through a bridge interface on the proxmox host that I also use for clients that should be directly connected to the ISP.

Has anyone experienced this, or is it well known that odd behavior is to be expected from opnsense when a WAN connection is unstable?

To be clear, the root cause appears to be the ISP equipment (Nokia Fastmile 5G router): even when connected directly to it, there are 'brief connection interruptions' of sometimes 30 seconds to a few minutes while the modem reconnects to other LTE/5G cell towers. But I would expect opnsense to recover from this without a reboot - or perhaps I am missing something?

Restarting the downed services does not fix this, and restarting the WAN interface via the web UI does not fix it either. opnsense does get an IP, but somehow its routing to the internet just doesn't work until a reboot occurs.
#7
Quote from: Maurice on March 12, 2021, 10:19:25 PM
What about two gateway groups, one for IPv6 and one for IPv4?

This is what I ended up having to do; the only drawback is that my firewall rules which are IPv6/IPv4* can't have a single gateway. I had to split them into separate rules per IP protocol.

Wonder if this is worth a feature request or not
#8
I have two WAN interfaces and looking to setup failover - I have configured an IPv4 and IPv6 gateway per each WAN... so we have a total of 2 ISP/WAN and 4 gateways.

When trying to define "Tier 1" it looks like I can't select Tier 1 more than once - how can I ensure that both IPv4 and IPv6 traffic use WAN1 until it fails, then fall back to WAN2?

Maybe I am confused because each WAN link has a gateway monitor for each IP protocol, but what I am trying to achieve seems simple. I did try to create a gateway group of another group, but alas that isn't possible lol
#9
Quote from: mimugmail on March 02, 2021, 06:11:46 AM
But its limited to 1G only

thanks. I did switch my VM settings to e1000 - the performance bottleneck seems to persist.

When I run iperf3 on the debian VM, the first downloads come in at double the speed, whereas on pf and opnsense downloads begin at 30mbps and slowly build up to 80-120mbps. The linux box reaches those speeds within 2-3 seconds.
#10
Quote from: Voodoo on March 01, 2021, 06:09:21 PM
Edit: nevermind didn't read, but virtio support on bsd is lacking, I think that's the issue.

This made me realize a possible bottleneck. I can try to add e1000 interfaces to both pfsense and opnsense VMs. Swap the configuration and re-test.

Would the e1000 driver be the best-performing option for BSD on proxmox/QEMU/KVM?
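For anyone repeating the swap: in proxmox the NIC model is set per interface in the VM config (illustrative lines for /etc/pve/qemu-server/<vmid>.conf; MAC address and bridge name are placeholders):

```
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0   # paravirtualized default
net0: e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0    # emulated Intel NIC for the test
```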
#11
Quote from: mimugmail on March 01, 2021, 05:16:17 PM
What hardware does this run on? On XEON I can achieve 1,9Gbit in both directions.
Sadly Wireguard runs bad on low-priced hardware.

All tests were from a virtualized pfsense, opnsense and debian 10 box. The VM specs for pf and opn are exactly the same - 2GB memory, 4 cores, 32gb disk, AES CPU passthru enabled.

The parent host is an intel xeon 24-core e5-2697 v2 with 128gb ddr3, on proxmox.

I was expecting the performance to be close to the same; the only tweaks that Linux has that FreeBSD does not are that TCP BBR is set up, and I also applied these tweaks on linux (not sure if there are similar ones for BSD):

# These optimize the server networking settings
echo '* soft nofile 51200' >> /etc/security/limits.conf
echo '* hard nofile 51200' >> /etc/security/limits.conf
ulimit -n 51200
echo 'fs.file-max = 51200
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.core.netdev_max_backlog = 250000
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_mem = 25600 51200 102400
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.ipv4.tcp_mtu_probing = 1
net.ipv4.tcp_congestion_control = hybla' > /etc/sysctl.conf
sysctl -p

modprobe tcp_bbr
sh -c 'echo "tcp_bbr" >> /etc/modules-load.d/modules.conf'
sh -c 'echo "net.core.default_qdisc=fq" >> /etc/sysctl.conf'
sh -c 'echo "net.ipv4.tcp_congestion_control=bbr" >> /etc/sysctl.conf'
sysctl -p   # reload so the appended BBR settings override the hybla line above
lsmod | grep bbr
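For what it's worth, rough FreeBSD analogues of some of those Linux tunables would be the following (a sketch using stock FreeBSD sysctl names, not opnsense-specific; values are illustrative, and BBR itself needs a custom FreeBSD kernel):

```
# /etc/sysctl.conf (FreeBSD) - approximate counterparts
kern.maxfiles=51200                 # ~ fs.file-max
kern.ipc.maxsockbuf=67108864        # ~ net.core.rmem_max / wmem_max ceiling
net.inet.tcp.sendbuf_max=67108864   # ~ net.ipv4.tcp_wmem max
net.inet.tcp.recvbuf_max=67108864   # ~ net.ipv4.tcp_rmem max
net.inet.tcp.cc.algorithm=htcp      # congestion control (cc_htcp.ko must be loaded)
```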
#12
Curious whether it is known that the Linux wireguard implementation is almost 2x or 3x faster than the FreeBSD implementation?

I have been benchmarking wireguard performance to a local datacenter VPS that will become my gateway soon (I have 5G internet at home and CGNAT).

I setup wireguard clients on 3 platforms and did the same tests.

Debian 10 (wireguard kernel mod):
-- downloading data from VPS to home

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.03  sec   453 MBytes   127 Mbits/sec   49             sender
[  5]   0.00-30.00  sec   442 MBytes   124 Mbits/sec                  receiver


-- uploading
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  24.1 MBytes  6.75 Mbits/sec   19             sender
[  5]   0.00-30.05  sec  23.9 MBytes  6.68 Mbits/sec                  receiver



pfsense 2.5.0 CE (built-in wireguard in kernel according to them):
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.25  sec   176 MBytes  48.7 Mbits/sec    1             sender
[  5]   0.00-30.00  sec   170 MBytes  47.5 Mbits/sec                  receiver

iperf Done.
[2.5.0-RELEASE][root@pfSense.pf]/root:

--upload
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  24.6 MBytes  6.89 Mbits/sec   20             sender
[  5]   0.00-30.26  sec  23.7 MBytes  6.56 Mbits/sec                  receiver


opnsense 21.1 (wireguard-go plugin):
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.25  sec   104 MBytes  28.9 Mbits/sec  1860             sender
[  5]   0.00-30.00  sec   101 MBytes  28.4 Mbits/sec                  receiver

-- upload
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  15.4 MBytes  4.32 Mbits/sec   22             sender
[  5]   0.00-30.26  sec  15.1 MBytes  4.19 Mbits/sec                  receiver


iperf3 without tunnel from opnsense:
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.28  sec   352 MBytes  97.5 Mbits/sec  999             sender
[  5]   0.00-30.00  sec   343 MBytes  96.0 Mbits/sec                  receiver

-- upload
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  13.4 MBytes  3.74 Mbits/sec   34             sender
[  5]   0.00-30.26  sec  13.3 MBytes  3.68 Mbits/sec                  receiver


For good measure, speed test from debian 10 box outside tunnel:
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.08  sec   321 MBytes  89.6 Mbits/sec  1278             sender
[  5]   0.00-30.00  sec   311 MBytes  87.1 Mbits/sec                  receiver

-- upload
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  31.0 MBytes  8.67 Mbits/sec   33             sender
[  5]   0.00-30.04  sec  30.8 MBytes  8.59 Mbits/sec                  receiver


speedtest.net results https://www.speedtest.net/my-result/d/f2d48b53-2bfe-4ed8-b76e-9c2fecdd8137

I really wish FreeBSD had performance similar to the Debian 10 VM - the setup is identical for all wireguard clients. The same ISP network router is upstream, connecting via IPv6 to the VPS server (since my ISP is an IPv6-only network, it's best to avoid direct connections to IPv4 hosts because of CGNAT). I may end up using this debian box as a secondary WAN for opnsense so that I can keep the performance gains for my traffic.
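Putting rough numbers on the gap, the download (sender) bitrates above work out to:

```shell
# Speedup ratios implied by the iperf3 download results quoted above
linux=127        # Mbits/sec, Debian 10 kernel wireguard
pfsense=48.7     # Mbits/sec, pfsense 2.5.0
opnsense=28.9    # Mbits/sec, opnsense 21.1
awk -v l="$linux" -v p="$pfsense" -v o="$opnsense" \
  'BEGIN { printf "Linux is %.1fx pfsense and %.1fx opnsense\n", l/p, l/o }'
```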
#13
I have been experimenting with the BBR congestion algorithm from Google. It's widely available in Linux. Apparently FreeBSD may have had it committed in https://reviews.freebsd.org/rS352657 (I am not sure how to check).

*edit: https://svnweb.freebsd.org/base?view=revision&revision=352657 * perhaps the feature request for OPNsense is to enable the build flags mentioned here?
Quote
This is a completely separate TCP stack (tcp_bbr.ko) that will be built only if you add the make options WITH_EXTRA_TCP_STACKS=1 and also include the option TCPHPTS

On linux I have seen improved performance with it and wanted to enable it on my OPNsense at home - does anyone have any guidance on how the TCP BBR congestion algorithm can be enabled for use on the system?

Found this but it isn't relevant to opnsense (https://fasterdata.es.net/host-tuning/freebsd/)

[root@fw ~]# ls /boot/kernel/cc_* | grep -v symbols
/boot/kernel/cc_cdg.ko
/boot/kernel/cc_chd.ko
/boot/kernel/cc_cubic.ko
/boot/kernel/cc_dctcp.ko
/boot/kernel/cc_hd.ko
/boot/kernel/cc_htcp.ko
/boot/kernel/cc_vegas.ko


If BBR is news to you, some references:
https://github.com/google/bbr
https://atoonk.medium.com/tcp-bbr-exploring-tcp-congestion-control-84c9c11dc3a9
https://www.cyberciti.biz/cloud-computing/increase-your-linux-server-internet-speed-with-tcp-bbr-congestion-control/
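For reference, if one had a FreeBSD kernel and modules built with WITH_EXTRA_TCP_STACKS=1 and options TCPHPTS, enabling BBR would look roughly like this (a sketch of stock FreeBSD knobs; stock opnsense does not ship tcp_bbr.ko, as the ls output above shows):

```
# /boot/loader.conf - load the separate BBR TCP stack at boot
tcp_bbr_load="YES"

# /etc/sysctl.conf - make BBR the default stack for new TCP connections
net.inet.tcp.functions_default=bbr
```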
#14
Quote from: Maurice on February 28, 2021, 08:33:20 PM
Router Advertisements are required. "Assisted" is a good default choice. If it works you can optimise later.

Also, you might want to limit the source in the NAT rule to LAN net.

Thanks so much for the tips and help here, it works!

On a simple opnsense WAN+LAN setup, these are the steps that got IPv6 from the ISP working on the LAN:

Quote from: TheLinuxGuy on February 28, 2021, 07:37:19 PM

Interfaces config : LAN
- Static IPv6
- IPv6 address: "fdde:5453:540e:ff12::" with prefix length 64
click save

Services : DHCPv6 LAN
- Range start
fdde:5453:540e:ff12::
- Range end
fdde:5453:540e:ff12:ffff:ffff:ffff:ffff
save & restart service

Firewall: NAT : outbound
- Set Hybrid outbound
- Add manual rule
interface WAN
TCP/IP version 6
protocol any
source LAN
destination any
translation target WAN address
log enabled
save


Then setting Router Advertisements on LAN to "Assisted" solved it.
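Under the hood, the manual outbound rule above should correspond to a pf NAT66 rule along these lines (illustrative pf.conf syntax using the ULA prefix from the steps; opnsense generates the real rule from the GUI, and $wan_if is a placeholder):

```
# NAT outbound IPv6 from the ULA LAN to the WAN address
nat on $wan_if inet6 from fdde:5453:540e:ff12::/64 to any -> ($wan_if)
```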
#15
Quote from: Maurice on February 28, 2021, 07:46:44 PM
Using native IPv6 with multiple LANs and without an available prefix larger than /64 is indeed impossible without NAT.


ACK. I noticed this in other threads I was reading on the subject - this is why I was thinking of maybe having only one VLAN with IPv6 enabled.

Right now all my VLANs are IPv4 only - I am trying to sort out what I need to do to get IPv6 working on this opnsense blank-slate test box before I touch my production opnsense install, which is working perfectly with just IPv4.

Quote from: Maurice on February 28, 2021, 07:46:44 PM
The LAN interface identifier should not be zero, that's a reserved anycast address. Better use fdde:5453:540e:ff12::1.

Thanks for this - LAN IPv6 set to fdde:5453:540e:ff12::1 - adjusted DHCP scope to account for start range ::2

Quote from: Maurice on February 28, 2021, 07:46:44 PM
How did you configure Router Advertisements?
Is there a firewall rule on the LAN interface passing IPv6?
Also, be aware that clients will always prefer IPv4 over IPv6 when using ULAs. Just one of the limitations of IPv6 NAT.

Router advertisements are 'disabled' on LAN by default - are settings needed there to make this work?

Presuming a setting here is needed - would "Assisted" (Stateful DHCPv6 and SLAAC, M+O+A flags) be ideal? Any hints on other settings are appreciated.

LAN firewall rules (recall this is a fresh install test box) do have an IPv6 rule that allows any.