Messages - deajan

#1
I've also encountered multiple (and strange) resolve errors with unbound like the following:

```
2024-09-18T13:46:40   Error   unbound   [54445:2] error: SERVFAIL <somedomain.tld. A IN>: all servers for this domain failed, at zone somedomain.tld. upstream server timeout   
2024-09-18T13:43:36   Error   unbound   [17415:1] error: SERVFAIL <xx.xx.xx.xx.in-addr.arpa. PTR IN>: all servers for this domain failed, at zone 64.92.188.in-addr.arpa. no server to query no addresses for nameservers   
2024-09-18T13:43:36   Error   unbound   [17415:0] error: SERVFAIL <xx.xx.xx.xx.in-addr.arpa. PTR IN>: exceeded the maximum nameserver nxdomains   
2024-09-18T13:43:30   Error   unbound   [17415:3] error: SERVFAIL <xx.xx.xx.xx.in-addr.arpa. PTR IN>: exceeded the maximum nameserver nxdomains   
2024-09-18T13:33:04   Error   unbound   [17415:3] error: SERVFAIL <85.21.107.40.zen.spamhaus.org. A IN>: exceeded the maximum nameserver nxdomains   
2024-09-18T13:32:26   Error   unbound   [17415:2] error: SERVFAIL <somedomain.tld. A IN>: all servers for this domain failed, at zone somedomain.tld. from 194.0.34.53 no server to query nameserver addresses not usable
```

After reading a lot of documentation and forum posts, I came across reports that ISPs may tamper with DNS.
In my setup, I've got three internet providers, so I configured Unbound to use WAN1 only, then WAN2, then WAN3.
While testing DNS requests, I noticed that the WAN1 provider was (probably) tampering with DNS, since both WAN2 and WAN3 produced good results but WAN1 didn't.
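
As an illustration, one way to compare answers is to query each provider's resolver directly for the same name (a rough sketch; the resolver addresses below are placeholders for your ISPs' DNS servers, and 9.9.9.9 is just an example public resolver):

```
drill somedomain.tld @203.0.113.53 A    # ISP 1 resolver (placeholder address)
drill somedomain.tld @198.51.100.53 A   # ISP 2 resolver (placeholder address)
drill somedomain.tld @9.9.9.9 A         # public resolver, for comparison
```

If one provider returns a different or empty answer for a name the others resolve fine, that provider is the likely culprit.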

Hopefully this might help some other people.


#3
Found where to add the fsfreeze hook, see https://github.com/opnsense/core/issues/7681

Nevertheless, there is a bug in qemu-guest-agent that prevents the thaw script from being launched properly :(
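
For reference, a minimal hook skeleton could look like the sketch below (an assumption on my part: qemu-guest-agent calls the hook with "freeze" before a snapshot and "thaw" afterwards; the log messages and placeholder actions are illustrative, not taken from the linked issue):

```
#!/bin/sh
# Minimal fsfreeze hook sketch; $1 is "freeze" or "thaw" as passed by qemu-guest-agent.
case "$1" in
    freeze)
        logger "fsfreeze hook: freeze requested"
        # e.g. stop services that keep RAM disks busy before the snapshot
        ;;
    thaw)
        logger "fsfreeze hook: thaw requested"
        # e.g. restart the services stopped above
        ;;
esac
```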
#4
So I finally tried one last solution, posted in a FreeBSD bug report: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=165059

Setting the following in `/boot/loader.conf` solved the issue:

```
#hw.vtnet.X.tso_disable="1"
hw.vtnet.tso_disable="1"
hw.vtnet.lro_disable="1"
#hw.vtnet.X.lro_disable="1"
hw.vtnet.csum_disable="1"
#hw.vtnet.X.csum_disable="1"
```

My remote uploads jumped from 20-30 Mbit/s to a whopping 300 Mbit/s ^^
I had previously tried disabling these offloads with ethtool, without success.
Also, I had to reboot for the changes to take effect.
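
After the reboot, the tunables should also be readable back as sysctls, which is a quick way to confirm they were picked up (assuming the vtnet driver exposes them as read-only tunables; a value of 1 means the offload is disabled):

```
sysctl hw.vtnet.tso_disable hw.vtnet.lro_disable hw.vtnet.csum_disable
```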
#5
You could check in the system tunables what your `net.inet.tcp.tso` setting is.
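
For instance, from a shell on the firewall (if I'm not mistaken, 1 means TSO is enabled, 0 means disabled):

```
sysctl net.inet.tcp.tso
```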

Have you selected Zenarmor's routed (L3) mode with the native netmap driver ?
#6
I've added the transfer tunnel network into the allowed IPs on each peer, and voilà, everything works as expected.
Sorry for the noise; I should have found that myself.
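
For anyone hitting the same thing, the change boils down to something like this in the peer definition (a wg-quick style sketch for illustration only; on OPNsense this lives in the peer's "Allowed IPs" field, the key/endpoint values are placeholders, and the networks are the ones from this thread):

```
[Peer]
PublicKey = <site B public key>
Endpoint = <site B WAN address>:51820
# remote LAN plus the transfer/tunnel network itself
AllowedIPs = 10.0.1.0/24, 192.168.100.0/24
```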

Thanks for your help @chemlud
#7
Nope, the transfer net isn't in the allowed IPs, and of course this makes perfect sense, since WireGuard would simply drop traffic to/from the tunnel IPs themselves.
I'll check that once I am onsite and report back.
Thanks.
#8
I've got an any-to-any firewall rule on both sides on the WireGuard (Group) interface.
What broader firewall rule am I supposed to create ?
#9
Loading Suricata rules spawns a Python process that indeed maxes out the CPU, but that should only make things slow, not freeze your OPNsense instance.

This loading process also consumes a lot of RAM; you should check whether that is your culprit.
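
A rough way to check is to watch memory while the rules are loading (assuming you have shell access; sort by resident size and keep an eye on swap usage):

```
top -o res
```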

From my experience, running OPNsense on hardware that is too low-end isn't the best idea.

I've got a couple of J4125 (2 GHz, 4 cores) boxes running OPNsense, and they needed an extra cooling fan just to keep temperatures from going through the roof, on top of losing throughput when the CPU frequency scales down.

Last but not least, don't run OPNsense on cheap Realtek NICs, which could explain why Zenarmor isn't happy with the offloading.
#10
General Discussion / Re: Synchronization with LDAP server
February 22, 2024, 06:37:08 PM
Okay, I actually retried my whole config.

Automagic user creation from LDAP when connecting to OpenVPN works, unless you set "Enforce local group" in OpenVPN config like I did.

So this is basically a security issue: if I remove an LDAP user from a group (let's call it "VPN GROUP") on the LDAP server, the user can still connect, since the user already exists on OPNsense.

I have set up an extended query like `&(memberOf:1.2.840.113556.1.4.1941:=CN=VPN GROUP,DC=domain,DC=local)(objectCategory=person)`, but users can still connect to OpenVPN once I've removed them from the LDAP "VPN GROUP".
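
For what it's worth, the filter itself can be tested outside of OPNsense with ldapsearch (a sketch with placeholder server, bind account and test user; the extra outer parentheses are needed for a standalone filter):

```
ldapsearch -H ldap://dc.domain.local -D 'binduser@domain.local' -W \
  -b 'DC=domain,DC=local' \
  '(&(memberOf:1.2.840.113556.1.4.1941:=CN=VPN GROUP,DC=domain,DC=local)(objectCategory=person)(sAMAccountName=someuser))' dn
```

If the user's DN stops coming back after removing them from the group, the filter is doing its job.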

[EDIT] After removing the recursive LDAP matching rule from the memberOf filter, adding / removing users from VPN GROUP controls their ability to connect to the VPN like it should. [/EDIT]
#11
General Discussion / Re: Synchronization with LDAP server
February 22, 2024, 06:25:27 PM
As far as I can tell from my config, no.
It is set up with 'Automatic user creation' and 'Synchronize groups', but this only seems to work when authenticating directly on the firewall, not when connecting via OpenVPN with LDAP support.

Perhaps I am wrong (I would love to be) ?
#12
General Discussion / Re: Synchronization with LDAP server
February 22, 2024, 06:21:14 PM
I also need to periodically click the import button, so OpenVPN users can connect.
Would be nice to be able to automatically sync users.

Any CLI command perhaps ?
#13
The tunnel network is outside of the site networks, e.g. the addresses are 192.168.100.1/24 and 192.168.100.2/24.
Allowed networks are 10.0.0.0/24 on site B and 10.0.1.0/24 on site A.

I don't have any blocked traffic, and every non-firewall IP can happily communicate with every remote IP.

It's only the two firewalls themselves that cannot ping each other.

If I set up an outbound NAT rule ("this firewall" to the remote network, translated to the LAN address), the firewalls can ping each other, but this just doesn't seem right.

As a side note, I cannot ping the remote tunnel IPs either, e.g. site A cannot ping site B's tunnel IP and vice versa.


#14
Hello,

I've set up a WireGuard site-to-site tunnel between two OPNsense 24.1.2_1 instances.
So far so good: the tunnel is up and firewall rules allow any IPv4 traffic on the "WireGuard (Group)" interfaces.

From any computer on site A (10.0.0.0/24) I can ping any computer on site B (10.0.1.0/24) and from B to A, so everything looks good.

But ping (and other protocols) doesn't work from the firewall itself, e.g. OPNsense A (10.0.0.1) to OPNsense B (10.0.1.1), nor does it work from OPNsense B to OPNsense A.

Now the strange part is, if I add the OPNsense source IP to the ping, e.g. `ping -S 10.0.0.1 10.0.1.1`, the ping works.

I'm a bit puzzled here.
The routing tables look good (10.0.1.0/24 via wg0 on OPNsense A and 10.0.0.0/24 via wg0 on OPNsense B).
It looks like the source IP that gets picked isn't right when running ping from OPNsense itself.
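
For reference, the route selection can be double-checked from a shell with something like this (it should report wg0 as the outgoing interface):

```
route -n get 10.0.1.1
```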

So basically, from OPNsense A:
`ping 10.0.1.1` does not work
`ping -S 10.0.0.1 10.0.1.1` works

Why do I need to specify the source IP when trying to ping the other firewall ?
I need the firewalls to be able to talk to each other (for DNS resolution); how can I achieve this ?

Looks like a bug to me.

Best regards.

PS: I've verified (multiple times) my config according to the docs.
Any idea is welcome ^^

PS2: Should I configure an outbound NAT rule ? That doesn't seem right to me.
#15
I understood that this is a RAM disk, which for obvious reasons cannot be quiesced.

Of course I could temporarily disable Zenarmor for backups, but automating this would be some kind of hell.

There must be a way to exclude specific disks somewhere; I just cannot find the freeze scripts that would let me configure this.
Any help would be appreciated.