Messages - Greelan

#1
FWIW, I migrated my rules today (I recently transitioned to dnsmasq, so I figured why not continue the transition xD) and thanks to the migration assistant it was seamless. Nice work team!
#2
I had been using that already, but the tunnel would stop working after several weeks.

Mullvad support also told me that they wouldn't support psk-exchange anymore.
#3
The PR was closed due to OPNsense's security posture, so I implemented it via devd instead. [Edit: logging added]

cat /usr/local/etc/devd/wg1-postup.conf
notify 100 {
    match "system" "IFNET";
    match "subsystem" "wg1";
    match "type" "LINK_UP";
    action "subsystem=$subsystem; if /usr/local/sbin/mullvad-upgrade-tunnel -wg-interface ${subsystem}; \
      then logger -t ${subsystem}-postup mullvad-upgrade-tunnel completed; \
      else rc=$?; logger -t ${subsystem}-postup mullvad-upgrade-tunnel failed, rc=${rc}; \
      fi";
};
#5
I need to run a PostUp command when my Mullvad WG interface comes up (to implement quantum resistant tunnelling: https://mullvad.net/en/help/quantum-resistant-tunnels-with-wireguard#modify-config).

I've successfully built the Mullvad utility for FreeBSD, and it works fine on the command line to establish ephemeral peers over the established tunnel to negotiate a PSK.

However, this needs to be run each time the tunnel is established.

There isn't any PostUp (or PostDown, PreUp or PreDown) option in the WG UI in OPNsense to easily add this. I know OPNsense doesn't directly use wg-quick, but there is also no equivalent option.

Is there another good way to do so? Or do I need to look at implementing changes to the OPNsense code to add advanced options in the UI to facilitate this?
#6
Just wanted to chime in to say kudos to Franco for having the courage to overhaul dhcp6c. It's been a long-neglected part of the *nix/BSD universe and was in need of some TLC. It just staggers me that this hasn't happened already at an industry level.
#7
Quote from: planetf1 on January 08, 2025, 09:08:39 AM
I literally set my virtual ip as 'fd77:2ac4:81ba::/48' which seems to work for clients getting a ULA, but also causes an issue with ntp if it tries to bind. You mentioned a /64 - did you use the CIDR similar to above, or an actual address? Was the type of the virtual ip just a regular virtual ip, or other?

Sorry, didn't get the notification for this. You've probably solved it/moved on, but to answer the question, they are addresses in CIDR notation, like: fdfd:2553:8868:66::1/64. The mode is IP Alias.
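As a side note, the difference between the bare /48 prefix planetf1 used and a host address in CIDR notation can be illustrated with Python's ipaddress module (just a sketch; the address is the example one from this post):

```python
import ipaddress

# A host address in CIDR notation, as used for the IP Alias virtual IP.
alias = ipaddress.ip_interface("fdfd:2553:8868:66::1/64")

# The interface form carries both a usable host address and its subnet,
# which is what a virtual IP needs (vs. a bare prefix like fd77:2ac4:81ba::/48).
print(alias.ip)       # fdfd:2553:8868:66::1
print(alias.network)  # fdfd:2553:8868:66::/64

# fd00::/8 addresses are ULAs; the stdlib reports them as private.
print(alias.ip.is_private)  # True
```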
#8
24.7, 24.10 Legacy Series / Re: Disk read errors
August 02, 2025, 02:00:53 PM
RRD is disabled, netflow is disabled, and I don't use Unbound.
#9
24.7, 24.10 Legacy Series / Re: Disk read errors
July 22, 2025, 12:08:23 PM
Less than a year in with the new disk, it's up to 133TB written (19%). This is even with things like netflow disabled. Sad.
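Some rough back-of-envelope math on those figures (assuming the SMART "Percentage Used" value scales linearly with rated endurance, and taking "less than a year" as roughly 11 months; both are assumptions, not values from the post):

```python
# 133 TB written corresponds to 19% of rated endurance used.
written_tb = 133
percent_used = 19

# Implied rated endurance of the drive.
rated_tbw = written_tb / (percent_used / 100)
print(f"implied rated endurance: {rated_tbw:.0f} TBW")  # 700 TBW

# Projected time until 100% used at the same write rate.
months_so_far = 11  # rough reading of "less than a year in"
months_to_wear_out = months_so_far * (100 / percent_used)
print(f"projected lifetime at this rate: {months_to_wear_out:.0f} months")  # 58 months
```

So even at this write rate the implied lifetime is just under five years, though the complaint about unnecessary wear stands.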
#10
Finally got around to looking into this. After removing all my configuration for Mullvad in OPNsense to build it again, it turns out that time on my account had expired at Mullvad's end and so all keys had been removed. Never got a notification from Mullvad. Once I paid up my account and reconfigured OPNsense, all is working again.
#11
I noticed the same thing yesterday as well. Not sure when it started. Don't have time to troubleshoot atm but will have to investigate. Not sure whether it's a change at Mullvad's end or with OPNsense.
#12
Quote from: franco on March 03, 2025, 08:12:54 AM
mimgmail repo? ;)

OPNsense repo actually (os-postfix plugin). But running the update again after installing 25.1.2 bumped the Postfix version and fixed the issue (as has been previously posted).
#13
24.7, 24.10 Legacy Series / Re: Disk read errors
October 29, 2024, 12:29:45 PM
Quote from: franco on August 24, 2024, 05:32:26 AM
Ok, this could coincide with

community/23.7/23.7.12:o system: change ZFS transaction group defaults to avoid excessive disk wear

We did have to apply this change because ZFS was wearing out disks with its metadata writes too much even when absolutely no data was written in the sync interval. You could say that ZFS is an always-write file system. Because if you always write, the actual data written will wear the drive, not the metadata itself. ;)

In your case it has probably been wearing out the disk before this was put in place. That's at least 2 years worth of increased wear.


Cheers,
Franco

Franco, was this change applied to existing systems, not just new installations?

Because two months into my new disk, I already have 23 TB of writes ...
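Franco's point about periodic transaction-group commits can be sketched with rough numbers. The 5 s figure is ZFS's stock vfs.zfs.txg.timeout default; the per-commit write size and the 60 s comparison value are hypothetical, since the actual defaults OPNsense changed aren't stated in this thread:

```python
# ZFS commits a transaction group (txg) every vfs.zfs.txg.timeout
# seconds, writing some metadata even when the system is idle.
SECONDS_PER_YEAR = 365 * 24 * 3600

def idle_metadata_writes_per_year(bytes_per_txg: float, txg_timeout_s: float) -> float:
    """Bytes written per year purely from periodic txg commits."""
    return bytes_per_txg * (SECONDS_PER_YEAR / txg_timeout_s)

# e.g. 0.5 MiB per commit at the stock 5 s timeout vs. a 60 s interval
fast = idle_metadata_writes_per_year(512 * 1024, 5)
slow = idle_metadata_writes_per_year(512 * 1024, 60)
print(f"5 s timeout:  {fast / 1e12:.1f} TB/year")   # 3.3 TB/year
print(f"60 s timeout: {slow / 1e12:.2f} TB/year")   # 0.28 TB/year
```

Stretching the timeout by 12x cuts the idle metadata wear by the same factor, which is the shape of the change described in the quote, though it doesn't by itself explain 23 TB in two months.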
#14
24.7, 24.10 Legacy Series / Re: Disk read errors
August 24, 2024, 01:46:55 PM
Quote from: Patrick M. Hausen on August 24, 2024, 01:18:44 PM
The interesting value for nominal/guaranteed endurance can be viewed with smartctl -a or smartctl -x. For an NVME drive it's "Percentage Used:" while for a SATA drive it's "Percentage Used Endurance Indicator".

In this particular case from one of your first posts:

Percentage Used:                    100%

So the disk is worn out according to specs and apparently in reality, too.

I monitor the wear indicators for my NAS systems in Grafana like in the attached screen shot.

Kind regards,
Patrick

Yeah, we already established that, and that's why the disk has been replaced. The last post before yours wasn't from me xD
#15
24.7, 24.10 Legacy Series / Re: Disk read errors
August 24, 2024, 05:07:47 AM
Almost exactly 3 years. It was a conversion from a UFS install, so the disk/system is around 4 years old.