Messages - slykens

#1
It can't.

I asked about this a year ago, and while the ability to bind WireGuard to an interface appears to be present in the FreeBSD 14 kernel, OPNsense does not yet expose that functionality.

In my use case I switched to IPsec, and while it has its own configuration difficulties in OPNsense, it has been quite reliable for the desired purpose.
#2
Quote from: cloudz on August 19, 2024, 11:47:11 AM
Quote from: slykens on August 19, 2024, 04:56:54 AM
I too have this problem but in my case it appears to affect only instances that were upgraded to 24.7.

I have a fresh 24.7 install, since upgraded to 24.7.1, that performs perfectly in the same scenario where an upgraded instance required the kernel rollback to work properly.

That is weird -- also a clean config? Or do all devices have a similar config?

The 24.7.1 instance that began as a fresh install of 24.7 has a clean config and does not have problems.

The 24.7.1 instance that did not work was installed last fall and upgraded as new releases came out.
#3
I too have this problem but in my case it appears to affect only instances that were upgraded to 24.7.

I have a fresh 24.7 install, since upgraded to 24.7.1, that performs perfectly in the same scenario where an upgraded instance required the kernel rollback to work properly.
#4
You can't just bind to a different port, because the traffic will still route via the current default gateway anyway - WireGuard binds to *:12345 (if 12345 were the port), not to a specific address. Even using a firewall rule to try to direct the traffic to another gateway is hit and miss.
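For reference, the kind of firewall rule I mean would look roughly like this in raw pf terms - a sketch only, with a hypothetical second-WAN interface (igb1), gateway (203.0.113.1), and port:

# Hypothetical pf rule: push WireGuard's outbound UDP out the
# second WAN's gateway instead of following the default route.
pass out quick route-to (igb1 203.0.113.1) proto udp from any to any port 12346 keep state

Even with a rule like this matching, the tunnel doesn't come up reliably.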

I'm running tunnels between sites, with BFD and BGP over them, for quick recovery.

An example topology is Site A with WAN 1 and WAN 2, and Site B with WAN 1. I want two WireGuard tunnels up at all times - one each from Site A/WAN 1 and Site A/WAN 2 to Site B/WAN 1. In this scenario, running BFD/BGP, connectivity recovers in a second or two rather than 15-30 seconds. Fast recovery is necessary because there is streaming video over these tunnels with limited buffers on each end.
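For illustration, the FRR side of that looks roughly like the following - ASNs and tunnel addresses here are made up, and this is a sketch of the idea rather than my exact config:

# bfdd: fast failure detection on each tunnel peer
bfd
 peer 10.99.1.2
  receive-interval 300
  transmit-interval 300
 exit
 peer 10.99.2.2
  receive-interval 300
  transmit-interval 300
 exit
!
# bgpd: one session per tunnel; BFD tears the session down
# almost immediately when a tunnel dies
router bgp 65001
 neighbor 10.99.1.2 remote-as 65002
 neighbor 10.99.1.2 bfd
 neighbor 10.99.2.2 remote-as 65002
 neighbor 10.99.2.2 bfd

With 300 ms intervals and FRR's default detect multiplier of 3, a dead tunnel is declared down in under a second, which is where the one-to-two-second recovery comes from.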

Now I'm running two OPNsense instances at Site A and letting them peer to get the desired behavior. It doesn't "cost" me anything - I have plenty of resources on the VM host there - but it would be nice to consolidate into one instance when 24.7 comes out.
#5
To add a data point here: I have a few OPNsense systems on Comcast in my area.

My home OPNsense has this problem - it acts as if it loses the IPv6 address on the WAN, but IPv6 still works EXCEPT for tunnels. I have multiple inside interfaces tracking WAN. It takes about 24-30 hours before it happens.

Another OPNsense in the same area, also on Comcast, has only one internal interface tracking WAN, and it has worked perfectly since being upgraded four days ago.
#6
Hello -

Is the ability to bind WireGuard instances to interfaces on the roadmap for 24.7?

I understand FreeBSD 14 is required for this, so I was hoping it would be there, but an install of the beta shows it is not present yet.

For those of us with dual WAN, the ability to bind an instance to an interface would greatly simplify configuring backup tunnels - right now I run a second OPNsense instance on my VM host, connected to my backup internet, so I can have separate backup WireGuard tunnels and use dynamic routing to manage them.
#7
Hello All -

I started having a lot of performance and reliability problems with my ZeroTier network on 24.1.4. Because of this I built an IPsec mesh to operate alongside the ZeroTier network and provide failover for it. This worked pretty well, with BFD and BGP in the mix to fail over properly.

Now, with 24.1.5_3, I've got very unpredictable IPsec behavior - on at least two of the four nodes in my network it seems to be passing all interface IPs to the other side, which causes the IKE SA to be renegotiated from internal or external IPv4 addresses even though all tunnels are configured for IPv6 only, and this renegotiation seems to break the tunnels. These are route-based tunnels configured through the Tunnel Settings UI. (Is that the problem?)

I feel like I'm taking crazy pills trying to diagnose this. Nothing in the logs makes it clear what's going on - the logs show an IPv6 conversation, then suddenly a new IKE SA is built on IPv4 with addresses that are not configured in any way for the tunnels. (For example, one IKE SA switched from public IPv6 addresses to 10.15.1.1 -- 10.250.0.11, which are both random internal addresses from each side.)
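For what it's worth, plain strongSwan lets you pin an IKE SA to specific endpoints with local_addrs/remote_addrs in swanctl.conf - something like the sketch below, with made-up addresses - but I don't know how much of this the Tunnel Settings UI actually exposes:

# swanctl.conf sketch: restrict IKE to the intended IPv6 endpoints
connections {
    site-b {
        version = 2
        local_addrs  = 2001:db8:a::1   # our public IPv6 (made up)
        remote_addrs = 2001:db8:b::1   # peer's public IPv6 (made up)
        # ... local/remote auth and children as already configured ...
    }
}

If the UI is populating those address lists with every interface address instead of pinning them, that would explain the SA wandering onto internal IPv4s.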

I'm hoping for some ideas or guidance on where to start figuring this one out. Thanks,
#8
24.1, 24.4 Legacy Series / Kernel panic
February 15, 2024, 04:30:00 PM
Hello all -

On 24.1.1 this morning I had a kernel panic - this instance runs as a VMware guest, with no issues on other guests or the host.

I have been using OPNsense in a few places since the great pfSense exodus months ago, and this is the first crash I've had. Maybe it's a one-off thing, but I figured I'd mention it here in case others see the same.

I believe the pertinent logs are below:

<45>1 2024-02-15T10:02:09-05:00 108-gateway. syslog-ng 28383 - [meta sequenceId="1"] syslog-ng starting up; version='4.6.0'
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="2"] Fatal trap 12: page fault while in kernel mode
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="3"] cpuid = 2; apic id = 04
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="4"] fault virtual address = 0x28
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="5"] fault code = supervisor read data, page not present
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="6"] instruction pointer = 0x20:0xffffffff80e93473
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="7"] stack pointer         = 0x28:0xfffffe00aa5b8940
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="8"] frame pointer         = 0x28:0xfffffe00aa5b8990
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="9"] code segment = base 0x0, limit 0xfffff, type 0x1b
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="10"] = DPL 0, pres 1, long 1, def32 0, gran 1
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="11"] processor eflags = interrupt enabled, resume, IOPL = 0
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="12"] current process = 64366 (ifconfig)
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="13"] trap number = 12
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="14"] panic: page fault
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="15"] cpuid = 2
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="16"] time = 1708009294
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="17"] KDB: stack backtrace:
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="18"] db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe00aa5b8700
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="19"] vpanic() at vpanic+0x151/frame 0xfffffe00aa5b8750
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="20"] panic() at panic+0x43/frame 0xfffffe00aa5b87b0
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="21"] trap_fatal() at trap_fatal+0x387/frame 0xfffffe00aa5b8810
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="22"] trap_pfault() at trap_pfault+0x4f/frame 0xfffffe00aa5b8870
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="23"] calltrap() at calltrap+0x8/frame 0xfffffe00aa5b8870
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="24"] --- trap 0xc, rip = 0xffffffff80e93473, rsp = 0xfffffe00aa5b8940, rbp = 0xfffffe00aa5b8990 ---
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="25"] in6_unlink_ifa() at in6_unlink_ifa+0x63/frame 0xfffffe00aa5b8990
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="26"] in6_purgeaddr() at in6_purgeaddr+0x367/frame 0xfffffe00aa5b8ab0
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="27"] in6_purgeifaddr() at in6_purgeifaddr+0x13/frame 0xfffffe00aa5b8ad0
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="28"] in6_control() at in6_control+0x5f7/frame 0xfffffe00aa5b8bc0
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="29"] ifioctl() at ifioctl+0x7bc/frame 0xfffffe00aa5b8cc0
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="30"] kern_ioctl() at kern_ioctl+0x26d/frame 0xfffffe00aa5b8d30
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="31"] sys_ioctl() at sys_ioctl+0x100/frame 0xfffffe00aa5b8e00
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="32"] amd64_syscall() at amd64_syscall+0x10c/frame 0xfffffe00aa5b8f30
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="33"] fast_syscall_common() at fast_syscall_common+0xf8/frame 0xfffffe00aa5b8f30
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="34"] --- syscall (54, FreeBSD ELF64, ioctl), rip = 0x75e83cf61ca, rsp = 0x75e807bbc58, rbp = 0x75e807bbcb0 ---
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="35"] KDB: enter: panic
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="36"] Uptime: 7d17h53m56s
<13>1 2024-02-15T10:02:09-05:00 108-gateway. kernel - - [meta sequenceId="37"] ---<<BOOT>>---
#9
Hello... another pfSense refugee here.

Still working on getting everything set up how I want, and tonight's project was wrangling HAProxy. I am having a problem with the HTTPS redirect, so I followed the tutorial in this thread, with no success.

When an HTTPS client hits HAProxy, it works as expected.

When an HTTP client hits HAProxy, I get the following error in the HAProxy log:

ssl_redirect/[::]:80: Received something which does not look like a PROXY protocol header

This is my present config export:

#
# Automatically generated configuration.
# Do not edit this file manually.
#

global
    uid                         80
    gid                         80
    chroot                      /var/haproxy
    daemon
    stats                       socket /var/run/haproxy.socket group proxy mode 775 level admin
    nbthread                    1
    hard-stop-after             60s
    no strict-limits
    tune.ssl.default-dh-param   2048
    spread-checks               2
    tune.bufsize                16384
    tune.lua.maxmem             0
    log                         /var/run/log local0 info
    lua-prepend-path            /tmp/haproxy/lua/?.lua

defaults
    log     global
    option redispatch -1
    timeout client 30s
    timeout connect 30s
    timeout server 30s
    retries 3
    default-server init-addr last,libc

# autogenerated entries for ACLs


# autogenerated entries for config in backends/frontends

# autogenerated entries for stats




# Frontend: https ()
frontend https
    bind 0.0.0.0:443 name 0.0.0.0:443 ssl alpn h2,http/1.1 crt-list /tmp/haproxy/ssl/6554226ca7c6c4.56456894.certlist
    bind [::]:443 name [::]:443 ssl alpn h2,http/1.1 crt-list /tmp/haproxy/ssl/6554226ca7c6c4.56456894.certlist
    mode http
    option http-keep-alive
    option forwardfor

    # logging options

    # ACTION: sni_translation
    # NOTE: actions with no ACLs/conditions will always match
    use_backend %[req.hdr(host),lower,map_dom(/tmp/haproxy/mapfiles/65542596a04585.83628685.txt)]

# Frontend: ssl_redirect ()
frontend ssl_redirect
    bind 0.0.0.0:80 name 0.0.0.0:80 accept-proxy
    bind [::]:80 name [::]:80 accept-proxy
    mode http
    option http-keep-alive

    # logging options

    # ACTION: ssl_redirect
    # NOTE: actions with no ACLs/conditions will always match
    http-request redirect scheme https code 301

# Backend: x_openvpn_as ()
backend x_openvpn_as
    # health checking is DISABLED
    mode http
    balance source
    # stickiness
    stick-table type ip size 50k expire 30m 
    stick on src
    http-reuse safe
    server x_openvpn_as 10.11.23.2:443 ssl verify none

# Backend: webui ()
backend webui
    # health checking is DISABLED
    mode http
    balance source
    # stickiness
    stick-table type ip size 50k expire 30m 
    stick on src
    http-reuse safe
    server webui 127.0.0.1:1443 ssl verify none



# statistics are DISABLED
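
Looking at the error again, I suspect the accept-proxy on the port-80 binds: that keyword tells HAProxy to require a PROXY protocol header on every incoming connection, and a plain browser never sends one, which matches the error exactly. Assuming nothing upstream is speaking PROXY protocol to port 80, the redirect frontend would presumably need to look like this instead (untested sketch):

# Frontend: ssl_redirect (binds without accept-proxy)
frontend ssl_redirect
    bind 0.0.0.0:80 name 0.0.0.0:80
    bind [::]:80 name [::]:80
    mode http
    option http-keep-alive
    http-request redirect scheme https code 301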


Any ideas or guidance are welcome and appreciated. Thank you.