This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - Isabella Borgward

#1
Tried my first upgrade with Central yesterday. It seemed to work, only a small step from 25.10 to 25.10_2.
But there doesn't seem to be any useful information in Central about this. There is no log, and after clicking Upgrade I wasn't really sure what it was doing. The "Ready to upgrade" and "Upgrade in progress" icons are the same colour, which seems like an obvious missed opportunity. We know that new colours are quite cheap nowadays.
#2
What is the meaning of the Expires column? There is no tooltip, no units, and it is not present in the screenshot in the current documentation.
#3
Scenario: on the TNET interface we have an L3 routed network; one example network is 10.10.45.0/24. Routes to these networks are configured in System: Routes: Configuration.

Outbound NAT policies for these networks have been created manually; I assume they're getting hits because at the far end I can see my test traffic originating from the expected IP.

Here's the problem: the replies are being dropped by the "Default deny / state violation rule". My guess is that the firewall does hold the matching state for these connections, otherwise it wouldn't know which private IP they're for (a state-table check is sketched after the log below). So what is the real issue?
Devices in the TNET subnet itself work fine; it's just the routed subnets that don't.

TNET        2025-06-17T08:42:25    91.2.119.28:80    10.10.45.49:32975    tcp    Default deny / state violation rule   
TNET        2025-06-17T08:42:17    91.2.119.28:80    10.10.45.49:32975    tcp    Default deny / state violation rule   
TNET        2025-06-17T08:42:09    91.2.119.28:80    10.10.45.49:32975    tcp    Default deny / state violation rule   
TNET        2025-06-17T08:42:05    91.2.119.28:80    10.10.45.49:32975    tcp    Default deny / state violation rule   
TNET        2025-06-17T08:42:01    91.2.119.28:80    10.10.45.49:32975    tcp    Default deny / state violation rule   
TNET        2025-06-17T08:41:59    91.2.119.28:80    10.10.45.49:32975    tcp    Default deny / state violation rule   
TNET        2025-06-17T08:41:57    91.2.119.28:80    10.10.45.49:32975    tcp    Default deny / state violation rule   
TNET        2025-06-17T08:41:56    91.2.119.28:80    10.10.45.49:32975    tcp    Default deny / state violation rule   
TNET        2025-06-17T08:41:55    91.2.119.28:80    10.10.45.49:32975    tcp    Default deny / state violation rule   
TNET        2025-06-17T08:41:54    91.2.119.28:80    10.10.45.49:32975    tcp    Default deny / state violation rule   
WAN        2025-06-17T08:41:54    45.76.130.220:14043    91.2.119.28:80    tcp    let out anything from firewall host itself (force gw)   
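
A quick way to confirm whether pf really does hold a matching state for these flows (a rough sketch, run from a shell on the firewall; the addresses are simply the ones from the log above):

# any states involving the routed test host?
pfctl -ss | grep 10.10.45.49
# and anything keyed on the remote web server
pfctl -ss | grep 91.2.119.28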





#4
Three times in the past four weeks the disk has filled up on my OPNsense box, and every time it has been something to do with Zenarmor.
The first was a cron job failing once per minute and leaving undelivered error mail behind. Fixed that.
The second was MongoDB. Sort of fixed that, though I haven't actually managed to get the DB engine working again [but at least it's not eating the disk].
This time it is /usr/local/zenarmor/log/active/worker0 logging at about 1 MB/s. It's mostly lines like this:

2025-03-10T10:01:53.378794 WARN ArrayStream was full, str: osver pos: 8191
2025-03-10T10:01:53.378802 WARN ArrayStream was full, str: ":" pos: 8191
2025-03-10T10:01:53.378811 WARN ArrayStream was full, str: "} pos: 8191
2025-03-10T10:01:53.378820 WARN ArrayStream was full, str: ," pos: 8191
2025-03-10T10:01:53.378828 WARN ArrayStream was full, str: remote_device pos: 8191
2025-03-10T10:01:53.378847 WARN ArrayStream was full, str: ":" pos: 8191
2025-03-10T10:01:53.378855 WARN ArrayStream was full, str: " pos: 8191
2025-03-10T10:01:53.378864 WARN ArrayStream was full, str: ," pos: 8191
2025-03-10T10:01:53.378871 WARN ArrayStream was full, str: community_id pos: 8191
2025-03-10T10:01:53.378879 WARN ArrayStream was full, str: ":" pos: 8191
2025-03-10T10:01:53.378892 WARN ArrayStream was full, str: 1:NLGH3mmPeENTT1aXwYWl5XLicUw= pos: 8191
2025-03-10T10:01:53.378908 WARN ArrayStream was full, str: " pos: 8191
2025-03-10T10:01:53.378916 WARN ArrayStream was full, str: ," pos: 8191
2025-03-10T10:01:53.378924 WARN ArrayStream was full, str: handshake_result pos: 8191
2025-03-10T10:01:53.378931 WARN ArrayStream was full, str: ":" pos: 8191
2025-03-10T10:01:53.378939 WARN ArrayStream was full, str: None pos: 8191
2025-03-10T10:01:53.378964 WARN ArrayStream was full, str: " pos: 8191
2025-03-10T10:01:53.378972 WARN ArrayStream was full, str: }

What is it complaining about here?
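
I still don't know what the warning itself means, so purely as a disk-protection stopgap (a sketch, not a fix; the path is the one above, and the 200M threshold and hourly schedule are arbitrary choices), a root cron entry like this would at least stop it eating the disk:

# stopgap only: empty the runaway Zenarmor worker log once it passes ~200 MB
0 * * * * find /usr/local/zenarmor/log/active/worker0 -type f -size +200M -exec truncate -s 0 {} \;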
#5
How is the message

local host is behind NAT, sending keep alives

determined? Is it due to what the far end says ["you are behind NAT"], or is it some other heuristic? I am seeing it in a scenario where there is definitely no NAT at my end ["local host"] and almost certainly none at the far end.
#6
Firewall with 2x WAN interfaces with public IPs, and 1 LAN with a private IP. The only routes to the far end of the tunnel use the WAN interfaces.
I see the message "local host is behind NAT, sending keep alives" even though there is no NAT involved. How is the firewall determining that NAT is in use?
#7
Two routers set up in HA, with two WANs: a /29 and a /30. The standby router cannot use the gateway on the /30 as there aren't enough IPs, so Monit constantly complains about it being down.
Can I get Monit to ignore the /30 WAN? So long as at least one WAN works, that's "good enough".
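
I don't know whether the Monit plugin GUI exposes this, but in raw Monit syntax the idea would be to define a ping check only for the gateway that matters, e.g. (a sketch; wan29_gateway and 203.0.113.1 are placeholders for the /29 gateway):

check host wan29_gateway with address 203.0.113.1
    # the /30 WAN gets no check at all, so Monit has nothing to complain about
    if failed ping count 3 with timeout 5 seconds then alert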
#8
I have two WANs. Each WAN has a gateway configured.
I want to be able to manage the firewall remotely to either public IP address.
I have spent quite a bit of time fiddling with this.
Ping works fine to both WANs. I could only get HTTPS working to one at a time.
I have found that the key to fixing this was explicitly setting a reply-to on the management access rule.
The rule consists of a source alias [a list of remote management public IPs], destination of "This Firewall" and a service of Any.
If I leave "reply to:" as "default", it doesn't work. The help for this option says:
Quote
Determines how packets route back in the opposite direction (replies), when set to default, packets on WAN type interfaces reply to their connected gateway on the interface (unless globally disabled). A specific gateway may be chosen as well here. This setting is only relevant in the context of a state, for stateless rules there is no defined opposite direction.
This makes sense: ICMP appears to be handled statelessly and works regardless of this setting. But HTTPS does not work unless I explicitly set reply-to to the interface's default gateway here, which seems to contradict what the documentation says.
There is a hint about "unless globally disabled", but what is that setting called and where is it?
Even specifying the gateway explicitly with the Gateway setting does not make this work.
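
One way to see what actually gets loaded into pf (a sketch; nothing OPNsense-specific, just pf's own tooling) is to dump the active ruleset and check which rules carry a reply-to option:

# show the active pf rules and pick out any that carry reply-to
pfctl -sr | grep reply-to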
#9
/var/db/mongodb/mongod.log is full of this:


2024-07-05T11:08:19.312+0000 I NETWORK  [conn742973] end connection 127.0.0.1:9158 (6 connections now open)
2024-07-05T11:08:20.414+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:9165 #742975 (7 connections now open)
2024-07-05T11:08:20.414+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:9166 #742976 (8 connections now open)
2024-07-05T11:08:20.414+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:9167 #742977 (9 connections now open)
2024-07-05T11:08:20.414+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:9168 #742978 (10 connections now open)
2024-07-05T11:08:20.415+0000 I NETWORK  [conn742975] received client metadata from 127.0.0.1:9165 conn742975: { driver: { name: "mongo-go-driver", version: "v1.7.0" }, os: { type: "freebsd", architecture: "amd64" }, platform: "go1.19.3" }
2024-07-05T11:08:20.415+0000 I NETWORK  [conn742976] received client metadata from 127.0.0.1:9166 conn742976: { driver: { name: "mongo-go-driver", version: "v1.7.0" }, os: { type: "freebsd", architecture: "amd64" }, platform: "go1.19.3" }
2024-07-05T11:08:20.415+0000 I NETWORK  [conn742977] received client metadata from 127.0.0.1:9167 conn742977: { driver: { name: "mongo-go-driver", version: "v1.7.0" }, os: { type: "freebsd", architecture: "amd64" }, platform: "go1.19.3" }
2024-07-05T11:08:20.415+0000 I NETWORK  [conn742978] received client metadata from 127.0.0.1:9168 conn742978: { driver: { name: "mongo-go-driver", version: "v1.7.0" }, os: { type: "freebsd", architecture: "amd64" }, platform: "go1.19.3" }
2024-07-05T11:08:20.716+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:9171 #742979 (11 connections now open)
2024-07-05T11:08:20.716+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:9172 #742980 (12 connections now open)
2024-07-05T11:08:20.716+0000 I NETWORK  [conn742979] received client metadata from 127.0.0.1:9171 conn742979: { driver: { name: "mongo-go-driver", version: "v1.7.0" }, os: { type: "freebsd", architecture: "amd64" }, platform: "go1.19.3" }
2024-07-05T11:08:20.716+0000 I NETWORK  [conn742980] received client metadata from 127.0.0.1:9172 conn742980: { driver: { name: "mongo-go-driver", version: "v1.7.0" }, os: { type: "freebsd", architecture: "amd64" }, platform: "go1.19.3" }
2024-07-05T11:08:20.720+0000 I NETWORK  [conn742980] end connection 127.0.0.1:9172 (11 connections now open)
2024-07-05T11:08:20.720+0000 I NETWORK  [conn742978] end connection 127.0.0.1:9168 (10 connections now open)
2024-07-05T11:08:20.720+0000 I NETWORK  [conn742975] end connection 127.0.0.1:9165 (9 connections now open)
2024-07-05T11:08:20.721+0000 I NETWORK  [conn742976] end connection 127.0.0.1:9166 (8 connections now open)
2024-07-05T11:08:20.721+0000 I NETWORK  [conn742977] end connection 127.0.0.1:9167 (7 connections now open)
2024-07-05T11:08:20.721+0000 I NETWORK  [conn742979] end connection 127.0.0.1:9171 (6 connections now open)
2024-07-05T11:08:21.831+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:9177 #742981 (7 connections now open)



You can see 18 events within a single second here. I have tried restarting mongod, but it made no difference. Additionally, the log file does not get rotated: I can see that it was rotated once, but never again.



root@router:/var/db/mongodb # ls -alh mongod.log*
-rw-------  1 mongodb  mongodb   336M Jul  5 11:08 mongod.log
-rw-------  1 mongodb  mongodb   4.8K May 10 11:15 mongod.log.2024-05-10T11-15-15
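
As a stopgap while the real cause is tracked down, mongod can at least be told to rotate its own log, either by signal or via an admin command (a sketch; assumes the legacy mongo shell is on the box and mongod listens on the default localhost port; the rotated files still have to be compressed or removed separately):

# ask the running daemon to rotate its log file
pkill -USR1 mongod
# or the same thing over the wire
mongo --quiet --eval 'db.adminCommand({ logRotate: 1 })'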

#10
My /var/spool/clientmqueue had literally hundreds of thousands of files in it. They are all just a UUID. The message info indicates that they are caused by this cron job:


# crontab -u nobody -l
# or /usr/local/etc/cron.d and follow the same format as
# /etc/crontab, see the crontab(5) manual page.
SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
#minute hour    mday    month   wday    command
# Origin/Description: Zenarmor/Zenarmor periodicals
*       *       *       *       *       /usr/local/sbin/configctl -d 'zenarmor periodicals' '>' /dev/null '2>&1'



Note the quotes around the >.
This does not work as intended:


# /usr/local/sbin/configctl -d 'zenarmor periodicals' '>' /dev/null '2>&1'
3fc7e3fa-5fbb-46f6-9069-939c2b154e05



This does:


# /usr/local/sbin/configctl -d 'zenarmor periodicals' > /dev/null '2>&1'
#


I don't really know configctl, but it also has a -q option to suppress stdout; perhaps that would make sense here.
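
For reference, a corrected entry would simply leave the redirection unquoted so /bin/sh interprets it (a sketch; the shipped file presumably gets regenerated by the plugin, so this may not survive an update):

#minute hour    mday    month   wday    command
*       *       *       *       *       /usr/local/sbin/configctl -d 'zenarmor periodicals' > /dev/null 2>&1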
#11
Reporting > Health > Traffic
Why are the interface graphs in bytes per second rather than bits per second? Nobody measures network throughput in bytes per second; doing it that way is almost perverse!
#12
Was "Sticky connections" renamed to "Sticky outbound NAT" at some point?

I see it referenced here:

https://github.com/opnsense/core/issues/2170

but can't find that setting.
#13
Trying to interoperate with SonicWall firewalls. They allow you to create an IPsec tunnel interface and create route policies on it, without assigning IPs to the tunnel or specifying any local/remote subnets. I am not sure about the terminology here, but I think this would be an unnumbered VTI.
I don't see a way to do this in the OPNsense UI, but it might just be that I am not familiar with how the UI works. If I choose "Route based" then I have to put IP addresses in, otherwise:

"A valid local network IP address must be specified."

Is it possible to do this?
#14
Can the interface descriptions seen via SNMP be synced with what is in the web interface?
E.g. every interface is reported as the device name:


RFC1213-MIB::ifDescr.1 = STRING: "igb0"
RFC1213-MIB::ifDescr.2 = STRING: "igb1"
RFC1213-MIB::ifDescr.3 = STRING: "igb2"
RFC1213-MIB::ifDescr.4 = STRING: "igb3"
RFC1213-MIB::ifDescr.5 = STRING: "igb4"


but in the UI we have the Identifier and Description values, which would be more useful than the device name. Running OPNsense on a variety of hardware means that different systems show different device names, even though all of them may have LAN, WAN1, WAN2, etc.
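
One thing that might be worth testing, though I have not confirmed that the SNMP daemon shipped with OPNsense actually picks it up: FreeBSD can attach a description to an interface at the kernel level, and some SNMP agents expose that as IF-MIB::ifAlias rather than ifDescr (a sketch; "WAN1" and the community string are placeholders):

# label the NIC in the kernel, then see whether it shows up over SNMP
ifconfig igb0 description "WAN1"
snmpwalk -v2c -c public 127.0.0.1 IF-MIB::ifAlias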
#15
I have been experimenting with using a ZeroTier tunnel as the default route for internet traffic.
It works OK once enabled with

zerotier-cli set <networkId> allowDefault=1

but then after a reboot it's broken: ZeroTier cannot establish a connection at all and no traffic is passed. Flip it back with allowDefault=0, reboot, and internet access is restored [albeit no longer over the ZT tunnel].
It is as if ZeroTier is trying to use its own default route to establish connectivity for its own traffic, which seems like a silly defect.

We have had some success with this deployment scenario using Teltonika RutOS devices, but they simply don't have the horsepower to handle the throughput we need, hence looking at doing this on OPNsense [and I must say I am pretty damn impressed with OPNsense so far, other than this specific issue].
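
One way to test that hypothesis (a sketch; the peer address and gateway below are placeholders, the real ones come from zerotier-cli peers and the WAN gateway in use) would be to pin host routes to ZeroTier's physical peers via the real WAN before enabling allowDefault, so ZeroTier's own control traffic never follows the tunnel's default route:

# list the physical endpoints ZeroTier is currently talking to
zerotier-cli peers
# pin one of them to the real WAN gateway (both addresses are placeholders)
route add -host 203.0.113.10 198.51.100.1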