Messages - deviantintegral

#1
Thanks. Here are my normal settings for the SSID: Isolation is off, UAPSD is on, Multicast Enhancement is on, and Multicast Control is off.

I set them to match your example, with no change in behavior. I also created a brand new SSID with default settings, joined the two devices to that network, and saw the same issue.

I had convinced myself that the APs had nothing to do with the issue. But to be sure, I unplugged them all and used a USB-C network adapter to plug my iPhone directly into the LAN. And it worked fine!

Next, I double checked IPv6 connectivity on the WLAN, and it looks good. I can ping devices, and CloudFlare's speedtest shows an IPv6 address.

As one last idea, I set IPv6 on all interfaces on the Mac to "link local only". I was thinking this would work, given how disabling IPv6 on the LAN fixed the issue. But, it showed the same problem.
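
For reference, this is roughly how I set that from the terminal (a sketch; the service names are whatever networksetup -listallnetworkservices reports on your Mac, and -setv6LinkLocal may not exist on older macOS releases, in which case the same option is in the GUI):

# List the network services so the exact names are known
networksetup -listallnetworkservices

# Restrict IPv6 to link-local only on the Wi-Fi service (repeat per service;
# -setv6LinkLocal may not be available on older releases)
networksetup -setv6LinkLocal "Wi-Fi"

# Confirm only fe80:: addresses remain on the interface
ifconfig en0 | grep inet6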
#2
I've been having difficulty with AirDrop in a specific scenario that I'm stuck on. First, some background.

  • AirDrop requires WiFi and Bluetooth to work at all.
  • Most of the time, AirDrop will use AWDL to send files with a direct peer-to-peer wireless link after the initial setup has completed. This avoids any connection with your current WLAN or LAN.
  • AirDrop uses an IPv6 link-local address to send data in this case.
  • In some circumstances AirDrop will use a wired connection instead of AWDL. Recently, Apple added support for devices connected directly with USB-C. Macs will also use ethernet if it's available, both devices are on the same LAN, and the heuristic thinks that's better than a direct connection. In that case it uses a regular IPv6 address instead of a link-local one (see the ping example after this list).
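
To make the link-local vs. routable distinction concrete, here's a rough sketch (the interface name en0 and both addresses are placeholders, not values from my network):

# Link-local (AWDL-style) target: fe80:: plus an explicit interface/zone
ping6 fe80::1cf5:2aff:fe03:4b21%en0

# Routable SLAAC target from the LAN's /64: no zone needed, normal routing applies
ping6 2001:db8:abcd:20:1cf5:2aff:fe03:4b21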

It's this last scenario that's broken for me. In particular, if I try to send a file with AirDrop from a Mac to an iPhone or iPad, the initial setup works fine but the actual file transfer hangs. I've tried with two Macs, and three different destination devices, and they all have the same issue. However, I've found two cases where AirDrop still works:

1. If the file is small enough (say a few MB) it sends fine. I think it just sends the file over AWDL instead of trying the wired connection.
2. If I disable IPv6 on the LAN in OPNsense, it works! I can see the data going out from the Mac over its ethernet connection, and then over wireless to the iDevice. It's only when AirDrop uses SLAAC addresses rather than link-local ones that it breaks.

This made me think there was perhaps a firewall issue, but nothing shows up in the logs. The devices are on the same VLAN anyway, with a switch in between. Also, while the AirDrop transfer is hanging, I can ping the IPv6 addresses it's using to send.

I did a capture with Wireshark as well. It shows a few seconds of encrypted UDP traffic, then some ICMPv6 solicitations and advertisements, and then it hangs.
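
For anyone following along, roughly the equivalent capture from the terminal (a sketch; Wireshark is what I actually used, and en0 stands in for the Mac's active interface):

# Watch the encrypted UDP stream plus the ICMPv6 solicitations/advertisements
# that show up right before the transfer hangs
sudo tcpdump -ni en0 'udp or icmp6'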

IPv6 in OPNSense is set up as a delegated prefix with the "Track Interface" option. I don't have any advanced options set up. Otherwise, as far as I can tell IPv6 is working correctly. At one point I suspected multicast settings on my UniFi APs, but given link-local IPv6 addresses work I don't think that's the case.

Any suggestions?
#3
If I search for `cake-autorate` in the forum search, I get a single result which goes to https://forum.opnsense.org/index.php?topic=38208.msg187250#msg187250.

However, if I search externally (I use Kagi, but here's a Google example), I also get a result pointing to https://forum.opnsense.org/index.php?topic=40706.0 which clearly has the same search term.
#4
Thanks, this was helpful. It turns out I was wrong about "Track Interface" using SLAAC; it actually uses DHCPv6 (so it can set the prefix, I presume). Once I figured that out, I was able to infer the right configuration and set it statically.

https://old.reddit.com/r/homelab/comments/acjzh4/ipv6_primer/ had some good bits too, in particular "you shouldn't subnet smaller than /64".
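
For what it's worth, the static setup on the host boils down to something like this (a sketch; vmbr0, the 2001:db8: prefix, and fe80::1 are placeholders for the real bridge, the delegated /64, and the router's link-local address):

# Static address low in the /64, well away from what SLAAC clients generate
ip -6 addr add 2001:db8:abcd:20::10/64 dev vmbr0

# Default route via the router's link-local address on that segment
ip -6 route add default via fe80::1 dev vmbr0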
#5
23.7 Legacy Series / Setting a static IPv6 client address
December 24, 2023, 03:35:54 AM
I'm in the process of setting up a Proxmox server, which needs a static IP assigned for both IPv4 and IPv6.

In the world of IPv4, the way I've normally set it up is:

1. Configure DHCP to hand out addresses from only part of the range; for example, only assign 192.168.20.51-192.168.20.254.
2. Assign static IPs from the first 50 addresses, which DHCP will never hand out.

With IPv6, I obtain an address from my ISP over DHCPv6, and have the LAN interface set to track it. I get a /56 from the ISP, and it looks like a /64 is used for LAN clients. Is there anything I need to do to ensure I don't get address conflicts if I just choose a random address in that /64 range? Or does duplicate address detection handle that, and the autoconfigured LAN clients will just generate another address if needed?
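
To make the question concrete, here's roughly what I'd do on the Proxmox host and what I'd expect duplicate address detection to tell me (a sketch; vmbr0 and the address are placeholders):

# Hand-pick an address from the LAN's /64
ip -6 addr add 2001:db8:abcd:20::10/64 dev vmbr0

# While DAD runs the address shows as "tentative"; if another host already
# owns it, it ends up flagged "dadfailed" instead of becoming usable
ip -6 addr show dev vmbr0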

I did poke around in the router advertisement settings when they were enabled, but didn't see anything that would obviously let me narrow the /64 or reserve a block of addresses for static use.

Thanks,
#6
Wow, that is it! Thank you. I wonder if it's a bug in VPNKit.

I reported this upstream to the WG mailing list, but the email is currently stuck in a moderation queue.
#7
23.7 Legacy Series / Issues connecting with Wireguard
August 23, 2023, 04:06:15 PM
I've been having an odd issue with my WireGuard setup, on both 23.1 and 23.7. When I connect from my iPhone or Mac running the official WireGuard client, much of the time the handshake will not go through. Sometimes the "Data sent" counter goes up by tens of MB a second, which is impossible given the network speed, and normal traffic doesn't actually work. If I reconnect the tunnel several times, it will eventually connect fine with no issues. There are no obvious errors in the logs that I can see, and tcpdump shows the WireGuard server responding to the connection.
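
This is roughly the capture I ran on the OPNsense side (a sketch; pppoe0 and 51820 are placeholders for the actual WAN interface and WireGuard listen port):

# Confirm handshake initiations arrive from the client and responses go back out
tcpdump -ni pppoe0 udp port 51820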

I also have WireGuard set up on a Linux host, and the same two clients never have a problem connecting to it.

Any suggestions on troubleshooting this?
#8
20.7 Legacy Series / Re: Where disable remote syslog?
January 08, 2021, 06:20:27 PM
This looks to still be a problem as of 20.7.7_1. Did anyone ever find a solution beyond exporting and editing the config by hand?
#9
Here it is. redacted_http_test is the plain-HTTP (no-SSL) frontend I've been testing with.

#
# Automatically generated configuration.
# Do not edit this file manually.
#

global
    # NOTE: Could be a security issue, but required for some feature.
    uid                         80
    gid                         80
    chroot                      /var/haproxy
    daemon
    stats                       socket /var/run/haproxy.socket level admin
    nbproc                      1
    nbthread                    1
    tune.ssl.default-dh-param   1024
    spread-checks               0
    tune.chksize                16384
    tune.bufsize                16384
    tune.lua.maxmem             0
    log /var/run/log local0

defaults
    log     global
    option redispatch -1
    timeout client 30s
    timeout connect 30s
    timeout server 30s
    retries 3
    # WARNING: pass through options below this line
    mode http
    option httplog

# autogenerated entries for ACLs

# autogenerated entries for config in backends/frontends

# autogenerated entries for stats


# Frontend: redacted_ssl ()
frontend redacted_ssl
    bind *:443 name *:443 ssl  crt-list /tmp/haproxy/ssl/5cc4fcfb4d50d1.96664841.certlist
    mode http
    option http-keep-alive
    option forwardfor
    # tuning options
    timeout client 30s

    # logging options
    # ACL: host_redacted
    acl acl_5cc4fe5883d959.03367346 hdr_end(host) -i redacted

    # ACTION: map_redacted
    use_backend %[req.hdr(host),lower,map_dom(/tmp/haproxy/mapfiles/5cc509feaa78f3.47022982.txt)] if acl_5cc4fe5883d959.03367346

# Frontend: letsencrypt ()
frontend letsencrypt
    bind *:80 name *:80
    mode http
    option http-keep-alive
    default_backend acme_challenge_backend
    # tuning options
    timeout client 30s

    # logging options

# Frontend: redacted_http_test ()
frontend redacted_http_test
    bind *:8080 name *:8080
    mode http
    option http-keep-alive
    default_backend server1_apache
    option forwardfor
    # tuning options
    timeout client 30s

    # logging options
    # ACL: host_redacted
    acl acl_5cc4fe5883d959.03367346 hdr_end(host) -i redacted

    # ACTION: map_redacted
    use_backend %[req.hdr(host),lower,map_dom(/tmp/haproxy/mapfiles/5cc509feaa78f3.47022982.txt)] if acl_5cc4fe5883d959.03367346

# Backend: acme_challenge_backend (Added by Let's Encrypt plugin)
backend acme_challenge_backend
    # health checking is DISABLED
    mode http
    balance source
    # stickiness
    stick-table type ip size 50k expire 30m
    stick on src
    # tuning options
    timeout connect 30s
    timeout server 30s
    http-reuse never
    server acme_challenge_host 127.0.0.1:43580

# Backend: server1_apache ()
backend server1_apache
    # health checking is DISABLED
    mode http
    balance source
    # stickiness
    stick-table type ip size 50k expire 30m
    stick on src
    # tuning options
    timeout connect 30s
    timeout server 30s
    http-reuse never
    server server1_apache backend.lan:80

# Backend: backend_grafana ()
backend backend_grafana
    # health checking is DISABLED
    mode http
    balance source
    # stickiness
    stick-table type ip size 50k expire 30m
    stick on src
    # tuning options
    timeout connect 30s
    timeout server 30s
    http-reuse never
    server backend_grafana grafana.lan:80
#10
First, I'm quite impressed by the HAProxy and Let's Encrypt plugins, and how they work together.

I'm running OPNsense on an APU2 board, and I'm noticing:


  • Slow transfer rates with high CPU usage
  • Latency spikes for unrelated network traffic when HAProxy is under load

For example, the backend web server (a basic Apache setup) can saturate the 1Gb network connection with no issues. If I do a straight HTTP proxy through HAProxy in OPNsense (no SSL offloading), performance caps out at about 200 Mbit/s; I expected closer to 300 or 400 Mbit/s. During a transfer, the HAProxy process uses close to 100% of a CPU core (the board has 4 cores), with 50% system CPU usage. If I ping a host from OPNsense during the speed test, latency goes up significantly for all connections going through the firewall.
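
For context, this is roughly how I measured, fetching the same large file directly versus through the plain-HTTP frontend on port 8080 (a sketch; firewall.lan and bigfile.bin are placeholders):

# Direct to the Apache backend: saturates the 1Gb link
curl -o /dev/null http://backend.lan/bigfile.bin

# Through the HAProxy frontend on OPNsense: tops out around 200 Mbit/s,
# with the haproxy process pinned near 100% of one core
curl -o /dev/null http://firewall.lan:8080/bigfile.bin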

So far I've noticed that http://www.haproxy.org mentions that pf causes quite a performance hit compared to Linux, but it's not clear how recent that note is.

Any ideas on how to solve the latency issues? I can live with slower transfer performance, but lag spikes will be a deal breaker for me.
#11
OPNsense is connected via DSL/PPPoE for its WAN connection. I have an OBiHAI SIP bridge for VoIP access. If the WAN IP changes, the old NAT mappings are still used, causing packets to be sent with the wrong source IP address. This breaks WAN connectivity until the states are killed.


  • In the firewall states dump, I filter on port 5060 to see the inbound and outbound mappings.
  • Note your current WAN IP, and click "reload" at the WAN interface in the overview to force a new connection.
  • After the IP has renewed, reload the states dump and note the outbound IP address is the old IP address and not the new one.
  • Killing the states restores WAN connectivity to the SIP bridge (a sketch of doing this from a shell follows this list).
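
Killing them by hand looks roughly like this from a shell on the firewall (a sketch; 192.168.20.30 stands in for the OBiHAI's LAN IP):

# Show the SIP states (the same thing the GUI states dump shows, filtered on 5060)
pfctl -ss | grep 5060

# Kill all states originating from the SIP bridge; it then rebuilds them
# with the new WAN IP
pfctl -k 192.168.20.30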

I've verified from a packet capture on the PPPoE interface that the wrong source IP is being sent. What's surprising to me is that nothing other than this one mapping appears to be affected.

In the firewall advanced options, I found "Dynamic state reset" which was not enabled. Turning that on fixed the stale mappings. Is there any reason why that option shouldn't be on by default?

This could be related to switching ISPs, from one that used DHCP to one using DSL and PPPoE. Is this a setting normally set during the setup wizard, which would be missed if you manually changed the WAN settings after the initial install?
#12
The patch didn't do anything, regardless of the override setting. I did have that enabled.

However, this pointed out a better workaround for now - if I statically set DNS, the bad routes aren't added.
#13
I've got exactly the same problem since upgrading. Deleting the two routes for each DNS server fixes the problem, and renewing the WAN IP brings them back.
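
In case it helps anyone else, clearing them by hand looks roughly like this (a sketch; 9.9.9.9 stands in for one of the DNS server addresses):

# Find the stale host routes for the DNS server, then remove them;
# they come back as soon as the WAN IP renews
netstat -rn | grep 9.9.9.9
route delete 9.9.9.9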
#14
One more bit - I just noticed that the UI isn't showing any data for the past 2 weeks. However, disk use has been steadily increasing in that time.
#15
I re-enabled netflow about a month ago, and ever since then its disk use has been slowly growing. This is on a home connection, so it's not /that/ much traffic to log. flowd and flowd_aggregate are both running:

root@blackbox:/var # service flowd status
flowd is running as pid 36517 36532.
root@blackbox:/var # service flowd_aggregate status
flowd_aggregate is running as pid 42215.
root@blackbox:/var # du -sh log/flowd* netflow
3.6G   log/flowd.log
11M   log/flowd.log.000001
11M   log/flowd.log.000002
11M   log/flowd.log.000003
11M   log/flowd.log.000004
12M   log/flowd.log.000005
11M   log/flowd.log.000006
11M   log/flowd.log.000007
11M   log/flowd.log.000008
11M   log/flowd.log.000009
11M   log/flowd.log.000010
5.3G   netflow


I'm going to have to reset the data manually as I'm almost out of disk space. My understanding is that simply running flowd_aggregate should be enough here. Is there anything else to check?
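
By "reset the data manually" I mean roughly the following (a sketch based on the paths shown above; I'd double-check them before running anything):

# Stop the collectors, remove the raw spool and the aggregate data,
# then start everything again so collection resumes from scratch
service flowd_aggregate stop
service flowd stop
rm /var/log/flowd.log*
rm -r /var/netflow/*
service flowd start
service flowd_aggregate start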