Topics - gromit

#1
In my IPv6 setup, the WAN is configured for DHCPv6 and all the local interfaces are set to "Track Interface". I have several ISC DHCPv6 static mappings configured on these tracked interfaces, using the "::1:2:3:4" suffix notation.

Since upgrading to 24.7.10 I've noticed that DHCPv6 static-mapping hostnames are not resolved correctly by Unbound. Instead of prepending the delegated DHCPv6-PD prefix to the suffix, it simply returns the suffix as-is. Looking at /var/unbound/host_entries.conf, I see both the local-data: and local-data-ptr: entries using just the suffix rather than the full prefix+suffix address.
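For illustration, assume a delegated prefix of 2001:db8:1:2::/64 and a static-mapping suffix of ::1:2:3:4 (both placeholders, as is the hostname). This is roughly what I'd expect in /var/unbound/host_entries.conf versus what I actually get:

# expected: suffix merged with the delegated prefix
local-data: "host1.example.lan. IN AAAA 2001:db8:1:2:1:2:3:4"
local-data-ptr: "2001:db8:1:2:1:2:3:4 host1.example.lan."

# actual since 24.7.10: the bare suffix
local-data: "host1.example.lan. IN AAAA ::1:2:3:4"
local-data-ptr: "::1:2:3:4 host1.example.lan."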

When I view the DHCPv6 static mappings in "Services: ISC DHCPv6: Leases" the correct, full IPv6 addresses are displayed.

Is this a regression in 24.7.10?
#2
Since about release 23.1.2 I have been getting complaints from the Monit service about the RootFs check failing. The e-mail is as follows:

Does not exist Service RootFs

Date:        Wed, 29 Mar 2023 17:07:23
Action:      restart
Host:        my.opnsense.host
Description: unable to read filesystem '/' state

Your faithful employee,
Monit


This still doesn't work as of the recent 23.1.5 update.

I am running OPNsense 23.1.5 on a ZFS-based install.  The install was bootstrapped from a FreeBSD install via opnsense-bootstrap.  It does have a ZFS file system mounted on /, so I'm not sure how to interpret the "unable to read filesystem '/' state" message.
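As I understand it, a Monit filesystem check is normally just something like this (a minimal sketch of what I assume OPNsense generates for RootFs; the threshold is a placeholder):

check filesystem RootFs with path /
    if space usage > 75% then alert

so I'd expect Monit merely to stat / and read its usage, which works fine from the shell on this box (df / and mount both report the ZFS root normally).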

Is anyone else experiencing this?  If so, is there a fix?  For now, I have disabled the service check so I don't get spammed with Monit e-mails about this service failing.  I would like to have the check working, though.
#3
I have a 3-port LACP LAGG on my OPNsense system, connected to a Cisco SG350 managed switch. This worked fine in previous versions of OPNsense going back years, but since upgrading to 23.1 it has given me problems. Specifically, it has trouble becoming active (configured) after boot. The individual laggports change status, with the flags moving through various states such as <>, <COLLECTING>, <ACTIVE,COLLECTING>, and even with some (but not all) in the desired <ACTIVE,COLLECTING,DISTRIBUTING> state.

This is what it looks like when it is properly configured:

$ ifconfig lagg0
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=4812098<VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWFILTER,NOMAP>
ether 00:eb:ca:c0:05:c5
laggproto lacp lagghash l2,l3,l4
laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
laggport: em2 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
laggport: em3 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
groups: lagg
media: Ethernet autoselect
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>


Even after 10 minutes or so, the LAGG is still cycling through these states, with the member laggports and the interfaces built on the LAGG going UP and DOWN accordingly as it tries to configure itself. The easiest way to fix it is to restart the Cisco switch. ???
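In case it helps anyone reproduce or poke at this, the LACP state can be watched, and a member forced to renegotiate, from the OPNsense shell with standard FreeBSD ifconfig syntax (interface names as in the output above; I haven't confirmed this is a reliable workaround):

# watch the per-port LACP flags
ifconfig lagg0 | grep laggport
# remove and re-add one member to force renegotiation
ifconfig lagg0 -laggport em1
ifconfig lagg0 laggport em1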

Has the way LAGG interfaces are configured changed in 23.1?  I see these two entries in the Changelog for 23.1:


  • interfaces: register LAGG, PPP, VLAN and wireless devices as plugins
  • src: assorted FreeBSD 13 stable fixes for e.g. bpf, bridge, bsdinstall ifconfig, iflib, ipfw, ipsec, lagg, netmap, pf, route and vlan components

I don't understand the import of either of those statements. This setup worked flawlessly up to 22.7.11, so whatever the problem is appears to have crept in with 23.1.

Any hints or suggestions on how to get the LAGG to activate reliably are most appreciated.
#4
General Discussion / Periodic.conf tunables?
November 10, 2022, 05:09:05 PM
The official OPNsense documentation on tunables (https://docs.opnsense.org/manual/settingsmenu.html) says that section covers loader.conf and sysctl.conf settings. I have a ZFS setup on which I would like to enable a periodic scrub. FreeBSD has a built-in /etc/periodic/daily/800.scrub-zfs task, disabled by default, that I'd like to enable via the daily_scrub_zfs_enable setting.
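Concretely, this is all I want to persist (standard FreeBSD periodic knobs; the threshold line is optional, and 35 days is the stock default if I remember right):

# /etc/periodic.conf.local
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="35"  # days between scrubs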

Assuming this can't be added as a tunable to the "System: Settings: Tunables" section, the normal way in FreeBSD would be to add it to /etc/periodic.conf or /etc/periodic.conf.local. Will the latter persist across updates?

(Are there any plans to add periodic settings to "System: Settings: Tunables"?)
#5
I have a "road warrior" IPSec IKEv2 VPN setup that is working for me, at least when it comes to split-tunnelling. I have been trying to get it to work with a split-DNS configuration so that VPN clients only use the VPN-provided DNS servers for the local VPN DNS domain and all other DNS requests (for domains other than that) should use the client's default DNS resolver.  It's the split-DNS setup that isn't working for me.

Can anyone confirm whether or not they've got this working with the built-in IKEv2 client in macOS Big Sur or newer?  If so, what was the magic needed to get this working?

I use Apple Configurator to create IKEv2 VPN profiles for macOS, so I don't mind if the solution involves that.
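For reference, this is the sort of DNS dictionary I understand the IKEv2 VPN payload needs for split DNS (the server address and match domain are placeholders, and I may well have this wrong, hence the question):

<key>DNS</key>
<dict>
    <key>ServerAddresses</key>
    <array><string>10.0.0.53</string></array>
    <key>SupplementalMatchDomains</key>
    <array><string>vpn.example.lan</string></array>
</dict>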
#6
I have a HA setup currently running OPNsense 22.1.5. The HA has been working well, but when I recently went to configure HA DHCP (https://docs.opnsense.org/manual/how-tos/carp.html#optional-setup-dhcp-server), I found that the resulting setup does not work correctly for me; at least, it does not work as I expect it to.

Here is how it is not working: I have the "DHCP Registration" option of Unbound enabled, but hostnames are not registered correctly in the local DNS. Each hostname is registered only in the DNS of the HA node whose DHCP server issued the client's lease.

Is this how it is supposed to work? It is not what I would expect: I would expect DHCP-registered hosts to be resolvable on both the active and passive nodes.

Here is an example:

HA Node A leases include this:

Interface   IP address      MAC address       Hostname Description Start                   End                     Status Lease type
PLUMBING    192.168.115.174 00:50:56:97:38:32                      2022/04/11 18:29:56 UTC 2022/04/11 20:29:56 UTC |||||  active
APP         192.168.116.177 00:50:56:97:89:1b rum-dev              2022/04/11 18:32:02 UTC 2022/04/11 20:32:02 UTC |||||  active


HA Node B leases include this:

Interface   IP address      MAC address       Hostname Description Start                   End                     Status Lease type
PLUMBING    192.168.115.174 00:50:56:97:38:32 awx-test             2022/04/11 18:29:56 UTC 2022/04/11 20:29:56 UTC |||||  active
APP         192.168.116.177 00:50:56:97:89:1b                      2022/04/11 18:32:02 UTC 2022/04/11 20:32:02 UTC *      active


||||| = "Online" graphic; * = "Offline" graphic.


From HA Node A, I can resolve rum-dev but not awx-test and vice versa from HA Node B. I would expect that the "DHCP Registration" Unbound option would allow DHCP hostnames to be resolvable from both Node A and Node B.
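(One way to check the registered names directly on each node, bypassing any client-side caching, is unbound-control; the config path is my assumption of where OPNsense keeps it:

unbound-control -c /var/unbound/unbound.conf list_local_data | grep -E 'rum-dev|awx-test'

and that matches what I see: each name only appears on the node that issued its lease.)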

In my DHCPv4 configuration I have the CARP VIP set for "DNS Servers" and "Gateway" and the "Failover IP" points to the real IP address of the opposite node in the HA pair.

Should this work or am I misconfiguring something?

Note: Static DHCP entries work fine, DNS-wise.
#7
I just upgraded to 22.1.3 from 22.1.2_1 and, upon rebooting into the new version, noticed that my Hurricane Electric Tunnel Broker tunnel was not working properly. The gateway was no longer listed in the "Gateways" widget on the dashboard. Looking further, in System: Gateways: Single I saw that the HE gateway was greyed out with a status of "Pending" and a priority of "defunct (upstream)".

I deleted my HE tunnel and recreated it, but it still did not work.

The change log for this release leads off with this statement:

Quote: "This update includes groundwork for interface handling improvements making the boot more flexible in complex interface assignment scenarios involving GIF, GRE and bridge devices."

Could my HE Tunnel Broker problems be related to this?

I recalled that my gif0 previously had the HE tunnel IPv6 endpoints defined on it, and that this was no longer the case after upgrading. I manually applied these to gif0 (as per HE's FreeBSD instructions) and manually defined an IPv6 default route, and that has at least got IPv6 going again for me.
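For the record, the manual workaround was essentially HE's generic FreeBSD commands (the addresses below are placeholders for my tunnel's actual endpoints):

ifconfig gif0 tunnel <my WAN IPv4> <HE server IPv4>
ifconfig gif0 inet6 2001:db8:1::2 2001:db8:1::1 prefixlen 128
route -n add -inet6 default 2001:db8:1::1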

Has the way the HE Tunnel Broker needs to be set up changed in 22.1.3?

#8
I'm running OPNsense 21.7.5-amd64 in a HA setup and have an IPSec tunnel defined for road warrior use. Most of the Phase 2 entries allow remote clients to access subnets on the OPNsense system (e.g., "LAN subnet"). One Phase 2 entry is for a single host reachable at a public IP through the WAN interface of the OPNsense system. Because this is a HA system, I'm using manual outbound NAT. I have outbound NAT rules defined for the WAN interface so that traffic from "IPSec net" is NATted using the CARP address of the WAN interface. For now, I allow all traffic to pass on the IPSec (enc0) interface.

Unfortunately, clients connected to the IPSec VPN are unable to reach the public IP through the VPN.  Tcpdump reveals that outbound NAT is not being performed: the client traffic passes out the WAN with the original IPSec client IP as the source address.

I do the same thing on a pfSense box and it works there. The setup is not quite identical: the pfSense box is not HA and so uses automatic rather than manual outbound NAT. The generated rules also differ. On OPNsense, selecting "IPSec net" results in a rule like "nat on cxl0 inet from (enc0:network) to any -> ..." (cxl0 is the WAN), whereas pfSense automatically generates an alias of IPs (tonatsubnets) and a rule like "nat on $WAN inet from <tonatsubnets> to any -> ...".

The pf.conf man page states that ":network" "translates to the network(s) attached to the interface"; however, enc0 does not have any IP addresses associated with it.

Is this the reason why outbound NAT isn't working in this case: because "(enc0:network)" does not evaluate to any IP address(es)?  On pfSense, the outbound NAT sources are explicit lists of IP addresses.

It should be noted that outbound NAT works on my OpenVPN VPN with "OpenVPN net" as the source.  However, the ovpns1 interface in the openvpn interface group does have IP addresses defined on it.

Is the correct solution for such outbound NAT to choose "Single host or Network" and enter the IPSec VPN subnet, instead of selecting "IPSec net", as the "Source address" in the manual outbound NAT rule for IPSec traffic?
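In pf terms, the rule I think I want would look something like this, with the tunnel subnet spelled out (10.10.10.0/24 standing in for my IPSec client pool and 203.0.113.10 for the WAN CARP VIP):

nat on cxl0 inet from 10.10.10.0/24 to any -> 203.0.113.10

rather than the "(enc0:network)" form the GUI currently generates.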
#9
I'm converting a pfSense 2.4.5_1 HA firewall setup over to an OPNsense 21.1.1 setup and am having trouble getting LDAP authentication working successfully on OPNsense.

I believe I've figured out the correspondence from the working pfSense config to OPNsense.  However, in pfSense there's a "Peer Certificate Authority" setting in the LDAP server setup whereas there is nothing corresponding to that in OPNsense.  Apparently, you are supposed to just ensure that all the certificates presented by the LDAP server are in the Trust settings.

Well, I've put what I believe are all the relevant certificates into Trust but, using the System: Access: Tester, all I get is this:

Quote: The following input errors were detected:

  • Authentication failed.

I can't find any errors logged relating to this under System: Log Files: General.

Is there some other place I can look for logging information relating to LDAP?  (It would be helpful if the "Tester" would output more verbose debug information, at least as an option.)  It would be nice to know at which stage it is failing.
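In the meantime, the only shell-level sanity check I know of is to verify the TLS chain directly (host and port are placeholders for my LDAP server):

openssl s_client -connect ldap.example.com:636 -showcerts

which at least confirms which certificates the server presents, though it says nothing about where the bind itself fails.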

Is it possible to increase the verbosity of the LDAP auth process to debug this problem? It's frustrating that this works on pfSense but I can't get it to work on OPNsense. :(