Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - aptalca

#1
And here are the gateway configs for both
#2
Thanks, I get that. But I'm using these interfaces to push the entire LAN traffic through a gateway group configured with failover. Each one connects to a single peer with Allowed IPs set to "0.0.0.0/0", so they are meant to be clients exclusively, not site-to-site, server, or a combination of the three.
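For context, the gateway group setup looks roughly like this (the group and gateway names here are placeholders, not the actual ones from my config):

```text
Gateway group: WG_FAILOVER
  Tier 1: WGCLIENT_TG_GW       (preferred tunnel)
  Tier 2: WGCLIENT_TG_CAN_GW   (used if tier 1 goes down)
  Trigger level: Packet Loss or High Latency

LAN firewall rule: pass from "LAN net" to any, gateway set to WG_FAILOVER
```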

The part that baffles me is why the ones I created a year ago have their auto rules set, but the new ones don't, even though all the settings look identical.

I'm attaching 2 sets of pictures, one for each wireguard tunnel setup:
a. interface config
b. wireguard interface config

The one labeled "1" in the picture names is for the tunnel I created a year ago; it has the correct automatic outbound NAT rules shown in the screenshot in the first post above.
The one labeled "2" in the picture names is for the new tunnel, for which no auto rules are created; instead, it is included in the "source networks" of the other auto rules.
#3
Question:

What factors are considered when the firewall creates/modifies automatic outbound nat rules? Perhaps a link to the code or documentation?

I tested further and the recycling of the wg device or the interface name is not the issue.

No matter how I create the WireGuard tunnels and interfaces, the automatic rules never treat the new interface as an outbound connection and never create rules for it. Instead, it is added as a source network to the rules for the other existing WireGuard tunnels.
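For what it's worth, my working hypothesis (an assumption on my part, not something confirmed from the OPNsense source) is that the auto-NAT generator splits interfaces into "WAN-like" (has a gateway, gets its own outbound rules) and "LAN-like" (no gateway, gets added as a source network on those rules). A toy sketch of that idea:

```python
# Toy model of the hypothesized automatic outbound NAT classification.
# This is NOT the actual OPNsense code; it only illustrates the idea that
# interfaces with a gateway get their own rules, while the rest become
# source networks on those rules.

def build_auto_nat(interfaces):
    """interfaces: dict of name -> {'has_gateway': bool, 'network': str}"""
    wan_like = [n for n, i in interfaces.items() if i["has_gateway"]]
    lan_like = [i["network"] for i in interfaces.values() if not i["has_gateway"]]
    # One rule set per WAN-like interface, NATing all LAN-like networks out of it.
    return {wan: sorted(lan_like) for wan in wan_like}

rules = build_auto_nat({
    "wg3": {"has_gateway": True,  "network": "10.2.0.2/32"},   # gets its own rules
    "wg0": {"has_gateway": False, "network": "10.1.13.0/24"},  # server net, source only
    "wg1": {"has_gateway": False, "network": "10.3.0.2/32"},   # the problem case
})
# If wg1's gateway isn't being recognized for some reason, it lands in the
# source-network list instead of getting its own rule set -- which matches
# the symptom I'm seeing.
```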
#4
Hi,

I've been using OPNsense for some years with multiple WireGuard interfaces and everything worked well. Outbound NAT rules were being automatically created for all the WireGuard (client) interfaces correctly.

But after updating to 24.1.5_3 and adding a new WireGuard client interface, I noticed the automatic NAT outbound rules are treating it differently (as if it were a server config). I double-checked and compared all the existing wg interfaces, gateways, and their instance configs to the new one, and I can't spot a difference. Perhaps I'm missing something, or perhaps there is a bug somewhere. I would appreciate another pair of eyes.

Previously I had 4 wireguard instances as listed below:
1. wgserver (wg0) - a server instance connected to many peers, tunnel address 10.1.13.1/24 (disable routes unchecked, no gateway defined)
2. wgclient-tg (wg3) - client connected to a single remote peer, tunnel address /32 (disable routes checked, gateway address manually defined)
3. wgclient-tg-can (wg4) - client connected to a single remote peer, tunnel address /32 (disable routes checked, gateway address manually defined)
4. wgclient-tg-backup (wg5) - client connected to a single remote peer, tunnel address /32 (disable routes checked, gateway address manually defined)
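For concreteness, each client instance boils down to something like the following wg-quick-style config (keys, addresses, and endpoint below are placeholders, not my actual values; OPNsense stores all of this in its GUI rather than a conf file):

```ini
# Illustrative WireGuard client instance -- all values are placeholders
[Interface]
PrivateKey = <client private key>
Address = 10.2.0.2/32              ; tunnel address as a /32

[Peer]
PublicKey = <remote peer public key>
Endpoint = peer.example.com:51820
AllowedIPs = 0.0.0.0/0             ; route everything through the tunnel
```

In the OPNsense GUI this corresponds to "Disable routes" being checked on the instance, with the gateway address entered manually and a matching gateway created afterwards.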

With those, I had manually created gateways for the 3 client instances with matching IPs, and interfaces assigned and enabled.

NAT outbound rules were automatically created for all three interfaces (2 each: one for ISAKMP, with both referencing all the relevant source networks, including the WireGuard server interface "wg0").
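As far as I can tell, each pair of auto rules corresponds to pf rules roughly like these (the interface and network values below are illustrative, not copied from my config):

```text
# ISAKMP (IKE, UDP/500) keeps its source port intact
nat on wg3 inet from 192.168.1.0/24 to any port 500 -> (wg3:0) static-port
# everything else gets source-port randomization
nat on wg3 inet from 192.168.1.0/24 to any -> (wg3:0) port 1024:65535
```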


Just now I added a new WireGuard client instance, "wgclient-tg-nj". This one got a recycled device name, "wg1" (previously used by a since-deleted instance). It has the same settings as the other wg client instances: client connected to a single remote peer, tunnel address /32 (disable routes checked, gateway address manually defined).

I also created a matching gateway and assigned and enabled the interface.

The automatic NAT outbound rules are treating it as if it were a wg server instance. Instead of creating new rules for it, the firewall just added that interface to the source networks of the other existing rules.

Attached is a screenshot of the new automatic outbound rules, where you can see the rules auto-created for the 3 WireGuard client interfaces, while the 4th one is just added as a source network for them instead of getting its own rules.

I know I can create manual rules, but I'm just baffled as to why this new one is treated differently than the other 3 client interfaces that seemingly have the same exact settings.

Thanks


tl;dr
Have 3 existing WireGuard client interfaces; auto NAT rules are created properly.
Added a new WireGuard client interface; no new auto NAT rules are created.
#5
I did 4 spaces. I'm not sure whether it was the indents or the removal of the quotes around the view name that fixed the issue as I did both at once and it worked.
#6
Quote from: aptalca on August 05, 2023, 03:21:59 AM
I also experienced a similar issue.

Prior to 23.7, I was using a custom unbound conf with an access-control-view defined.

Once I upgraded to 23.7, unbound would no longer start. Removing the access-control-view allows unbound to start. No idea what's causing it.

Here's my redacted sample conf:


server:

access-control-view: 192.168.2.0/24 "vlan10"

local-zone: "domain.url" redirect
local-data: "domain.url 86400 IN A 192.168.1.1"

view:
name: "vlan10"
local-zone: "localdomain" deny


I'm trying to prevent local DNS lookups from vlan10.

It turns out my issue was with formatting. I removed the quotes around the view name and fixed the indentation, and now unbound starts with the following custom config. I guess the older version of unbound tolerated these formatting issues but the newer version doesn't.


server:

access-control-view: 192.168.2.0/24 vlan10

local-zone: "domain.url" redirect
local-data: "domain.url 86400 IN A 192.168.1.1"

view:
    name: vlan10
    local-zone: "localdomain" deny
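For anyone hitting similar startup failures: assuming shell access to the firewall, unbound-checkconf can validate the merged config before restarting the service. The path below is where OPNsense keeps the generated config on my box; adjust if yours differs:

```shell
# validate the generated unbound config; exits non-zero on a parse error
unbound-checkconf /var/unbound/unbound.conf
```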
#7
I also experienced a similar issue.

Prior to 23.7, I was using a custom unbound conf with an access-control-view defined.

Once I upgraded to 23.7, unbound would no longer start. Removing the access-control-view allows unbound to start. No idea what's causing it.

Here's my redacted sample conf:


server:

access-control-view: 192.168.2.0/24 "vlan10"

local-zone: "domain.url" redirect
local-data: "domain.url 86400 IN A 192.168.1.1"

view:
name: "vlan10"
local-zone: "localdomain" deny


I'm trying to prevent local DNS lookups from vlan10.
#8
Hardware and Performance / Re: TRIM on DEC750
May 09, 2022, 03:44:26 PM
Thanks for this post.

I just did a brand new install on an ssd with zfs and was trying to figure out whether trim was enabled.

According to the commands you provided, trim was enabled and I could run it manually, however autotrim was not enabled.

"zpool set autotrim=on zroot" took care of that.