Messages - CanadaGuy

#1
Quote from: doktornotor on August 25, 2024, 03:17:34 PM
If you never use it, definitely disable the netflow thing. Will make life a whole lot longer for your SSD, if using one.
Thanks, that's a good tip. In principle I wish I had an actual use for it, but you're absolutely right and I hadn't considered that. I suppose that's where off-device logging/netflow is useful.

Incidentally, those log messages have disappeared since I did a reset. Since they weren't there before one of the recent updates, I'm thinking something changed during the upgrade to cause that issue. If it was writing roughly 5,000 log messages per day, I could see how that might cause problems.
#2
Thanks, I'll try those. I never look at the data anyway.

For reference, it's a 4-core 3 GHz CPU with 8 GB of RAM (only 1.5 GB ever used).
#3
A reboot brought everything back, so it doesn't seem like a systemic malfunction. I'm inclined to think that, with it being new software, there could be something else going on that requires the right conditions.

Looks like the default (I would never have changed it) is that local logging is enabled, per the attached image.

Thinking of a similar situation, Ubiquiti EdgeRouters can crash when the local filesystem fills up with logging. Interestingly, I currently have pages and pages (hundreds) of this:

2024-08-24T12:28:28-04:00 Notice flowd_aggregate.py flowparser failed to unpack flow_times (unpack requires a buffer of 8 bytes)
2024-08-24T12:28:28-04:00 Notice flowd_aggregate.py flowparser failed to unpack agent_info (unpack requires a buffer of 16 bytes)
2024-08-24T12:28:28-04:00 Notice flowd_aggregate.py flowparser failed to unpack if_indices (unpack requires a buffer of 8 bytes)
2024-08-24T12:28:28-04:00 Notice flowd_aggregate.py flowparser failed to unpack octets (unpack requires a buffer of 8 bytes)
2024-08-24T12:28:28-04:00 Notice flowd_aggregate.py flowparser failed to unpack proto_flags_tos (unpack requires a buffer of 4 bytes)
2024-08-24T12:28:28-04:00 Notice flowd_aggregate.py flowparser failed to unpack flow_times (unpack requires a buffer of 8 bytes)
2024-08-24T12:28:28-04:00 Notice flowd_aggregate.py flowparser failed to unpack agent_info (unpack requires a buffer of 16 bytes)
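
For what it's worth, this is how I've been keeping an eye on how much space the flow data is taking (the paths are what I found on my box and may differ):

# overall space on the filesystem holding logs and flow data
df -h /var
# size of the aggregated netflow sqlite databases and of the logs
du -sh /var/netflow /var/log 2>/dev/null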


Here's the log from last night, from when it was last known to be working up to my reboot this morning. August 21st, at the bottom of the log, is when I did the latest update.
2024-08-24T09:32:02-04:00 Notice kernel The Regents of the University of California. All rights reserved.
2024-08-24T09:32:02-04:00 Notice kernel Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
2024-08-24T09:32:02-04:00 Notice kernel Copyright (c) 1992-2023 The FreeBSD Project.
2024-08-24T09:32:02-04:00 Notice kernel ---<<BOOT>>---
2024-08-24T09:32:01-04:00 Notice syslog-ng syslog-ng starting up; version='4.8.0'
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum done
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum interface_086400.sqlite
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum interface_003600.sqlite
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum interface_000300.sqlite
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum interface_000030.sqlite
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum dst_port_086400.sqlite
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum dst_port_003600.sqlite
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum dst_port_000300.sqlite
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum src_addr_086400.sqlite
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum src_addr_003600.sqlite
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum src_addr_000300.sqlite
2024-08-24T01:11:13-04:00 Notice flowd_aggregate.py vacuum src_addr_details_086400.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum done
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum interface_086400.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum interface_003600.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum interface_000300.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum interface_000030.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum dst_port_086400.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum dst_port_003600.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum dst_port_000300.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum src_addr_086400.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum src_addr_003600.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum src_addr_000300.sqlite
2024-08-23T17:11:02-04:00 Notice flowd_aggregate.py vacuum src_addr_details_086400.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum done
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum interface_086400.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum interface_003600.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum interface_000300.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum interface_000030.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum dst_port_086400.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum dst_port_003600.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum dst_port_000300.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum src_addr_086400.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum src_addr_003600.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum src_addr_000300.sqlite
2024-08-23T09:10:47-04:00 Notice flowd_aggregate.py vacuum src_addr_details_086400.sqlite
2024-08-23T07:07:09-04:00 Notice dhclient dhclient-script: Creating resolv.conf
2024-08-23T07:07:09-04:00 Notice dhclient dhclient-script: New Hostname (igc0): CPE88c9b3bf769e-CMdc360ca0e2cc
2024-08-23T07:07:09-04:00 Notice dhclient dhclient-script: Reason RENEW on igc0 executing
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum done
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum interface_086400.sqlite
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum interface_003600.sqlite
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum interface_000300.sqlite
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum interface_000030.sqlite
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum dst_port_086400.sqlite
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum dst_port_003600.sqlite
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum dst_port_000300.sqlite
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum src_addr_086400.sqlite
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum src_addr_003600.sqlite
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum src_addr_000300.sqlite
2024-08-23T01:10:39-04:00 Notice flowd_aggregate.py vacuum src_addr_details_086400.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum done
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum interface_086400.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum interface_003600.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum interface_000300.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum interface_000030.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum dst_port_086400.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum dst_port_003600.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum dst_port_000300.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum src_addr_086400.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum src_addr_003600.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum src_addr_000300.sqlite
2024-08-22T17:10:35-04:00 Notice flowd_aggregate.py vacuum src_addr_details_086400.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum done
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum interface_086400.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum interface_003600.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum interface_000300.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum interface_000030.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum dst_port_086400.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum dst_port_003600.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum dst_port_000300.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum src_addr_086400.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum src_addr_003600.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum src_addr_000300.sqlite
2024-08-22T09:09:59-04:00 Notice flowd_aggregate.py vacuum src_addr_details_086400.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum done
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum interface_086400.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum interface_003600.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum interface_000300.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum interface_000030.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum dst_port_086400.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum dst_port_003600.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum dst_port_000300.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum src_addr_086400.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum src_addr_003600.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum src_addr_000300.sqlite
2024-08-22T01:09:05-04:00 Notice flowd_aggregate.py vacuum src_addr_details_086400.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum done
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum interface_086400.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum interface_003600.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum interface_000300.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum interface_000030.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum dst_port_086400.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum dst_port_003600.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum dst_port_000300.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum src_addr_086400.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum src_addr_003600.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum src_addr_000300.sqlite
2024-08-21T17:08:14-04:00 Notice flowd_aggregate.py vacuum src_addr_details_086400.sqlite
#4
It's an OptiPlex 5050 SFF, so the LAN is the integrated Intel® i219-V Ethernet (10/100/1000).
The WAN is an IO Crest 2.5 Gigabit Ethernet PCI Express NIC (10/100/1000/2500 Mbps, RJ45, Intel I225 chipset, SY-PEX24076).

It has been running great for a year and a half (mostly... a few screw-ups on my part), but I haven't made any layer 2 or layer 3 changes in probably a year.

Your statement is intriguing because I could imagine that a new release with many upgrades may result in new/changed settings.
#5
I updated to 24.7 shortly after it was released, then to .1 and .2 on August 21st. Things had been fine; however, at some point last night it simply stopped switching VLANs (at least), and maybe routing too. All I know is that all my VLAN trunks and routing stopped working. A reset brought it back up.

I couldn't reasonably use console access, since my password is stupidly long, I don't have an easy-to-use console, and I didn't want to try troubleshooting over my phone's LTE connection.

I'm not experienced in looking for/parsing logs. Is there a way to find out when and why it stopped last night? I've already rebooted, so does that mean logs are gone?
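
In case it helps anyone answering, this is what I've poked at so far from a shell, on the assumption that the syslog-ng targets are the dated files under /var/log (that's how it looks on my box):

# list the system log files by date
ls -l /var/log/system/
# look for anything alarming around the time it stopped (date and keywords are just examples)
grep -iE 'error|fail|panic|vlan' /var/log/system/system_20240824.log | less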
#6
When I reboot my OPNsense box, my Cisco ATA seems to have issues, in that the ATA somehow gets a stuck state in the firewall and that state never times out (even after days) or otherwise clears. Once I delete the stuck state, the ATA connects and is good until the next reboot.

1) Can someone describe, or point me to, how I would script the removal of a firewall state based on source IP (and maybe destination port)? (Rough sketch of what I'm imagining just below.)
2) Can someone describe how I might run that after a delay once OPNsense starts up?
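
Roughly what I have in mind, assuming pfctl's -k option kills states by source host the way I think it does (untested sketch; the addresses are placeholders for my ATA and its SIP server):

#!/bin/sh
# clear_ata_state.sh - untested sketch, run a while after boot
# give OPNsense time to finish loading rules and the ATA time to (re)register
sleep 120
# kill every state whose source host is the ATA
pfctl -k 192.168.10.50
# or, more narrowly, only states from the ATA to a specific SIP server
# (first -k is the source host, second -k is the destination host)
#pfctl -k 192.168.10.50 -k 203.0.113.25
# as far as I can tell, pfctl -k can't match on a destination port directly,
# so host-level is as granular as this gets

For the second question, I'm guessing either an @reboot entry in root's crontab or dropping the script into OPNsense's start hook directory (something like /usr/local/etc/rc.syshook.d/start/, if I have the path right) would do it, but I'd appreciate confirmation of the supported way.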

I do believe it is related to FreeBSD or OPNsense, as I have a similar issue (I think) with WireGuard tunnels that run on a host behind OPNsense. I don't have the skills or knowledge to debug this myself, but I would be open to working with someone to resolve the root issue. I did not have these issues with these exact devices behind my Ubiquiti EdgeRouter.
#7
Could a check be added to the update process so that, if a shell was configured before the update, at least /bin/sh is configured afterwards?
#8
Health audit? Is that an OPNsense thing? I wasn't able to access the box as it is headless, and the reboot restored things. It was routing and reading the config just fine, as all my subnets were doing what they were supposed to do.
#9
I had the same issue. After a power outage the WebUI didn't come up and I couldn't SSH in with my non-root user. I had to reboot OPNsense to get the WebUI back, then restore the login shell.

You mention bash, which I recall may have been added by a plugin in 23.1? I think I had the same configuration.
#10
We had a power outage today, and upon reboot the WebUI wasn't loading. I tried to SSH in to restart the WebUI, but I kept getting a keyboard-interactive prompt despite knowing the password was correct. After restarting OPNsense, I could get back into the UI and noticed what's shown in the attached image. After returning the shell to /bin/sh for my user, I could log in as usual.
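
For anyone hitting the same thing, this is how I put the shell back once I had a root console (standard FreeBSD pw; the username is just an example):

# show what the account currently has as its login shell
pw usershow myuser
# point it back at a shell that actually exists on the box
pw usermod myuser -s /bin/sh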

Could this have happened during the 23.7 update? I rarely SSH in so it's possible it was due to an issue from a while ago. Anyone else have this problem?

Update: I just found this post https://forum.opnsense.org/index.php?topic=35415.msg171845
#11
I was running into issues with radvd not starting after I made some IPv6 changes. After debugging, I discovered that no error or warning is presented when setting a static IPv6 address that already exists as a virtual IP (on the same interface).

I accept this is 100% user error, but perhaps checking the virtual IP list when adding an interface static IP could prevent similar issues. I was switching between IPv6 subnets, so I had my new subnet IP as a virtual IP while I was attempting to debug other connectivity issues.
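
For reference, this is roughly how I finally spotted it from the OPNsense shell (the interface name is just an example):

# list every IPv6 address currently configured on the interface;
# the new static address and the old virtual IP turned out to be the same address
ifconfig igc1 | grep inet6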
#12
If I reboot the OPNsense host, WireGuard and SIP VoIP encounter issues with what appear to be stale or bad states in the firewall table. I can clearly see the states, delete them, and instantly restore connectivity. This only happens after a reboot.

My wild guess is that the stuck states are left over from connection attempts during the OPNsense boot. For some reason the firewall still sees the state as valid, but neither WG nor my SIP VoIP works.

No other services have issues.

Thoughts?
#13
Since I switched to OPNsense I've had issues with my WireGuard tunnels. I connect several tunnels from a host on my LAN to a few servers on the public internet. These tunnels seem to go stale and stop passing traffic after a while. I have a 10-second keepalive, but that doesn't seem to keep the tunnel open. Searching for the destination IP in my firewall state table and deleting those states allows the connection to resume.

Is there any state checking I can implement to keep this from happening? I'm using "port forward" to implement DNAT as I want to redirect these IPs for everything BUT SSH and WG UDP.

What can cause the firewall state to stop forwarding traffic and prevent opening a new connection?
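
In the meantime, this is the manual workaround when a tunnel goes stale (placeholders: 192.168.20.40 is the LAN host running the tunnels, 198.51.100.7 is one of the remote servers):

# confirm there's a stuck state involving the remote endpoint
pfctl -ss | grep 198.51.100.7
# kill the states from the LAN host to that endpoint so a fresh one gets created
pfctl -k 192.168.20.40 -k 198.51.100.7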
#14
Google produced this result:

https://forum.opnsense.org/index.php?topic=23747.msg113055

and it seems there are no "custom options" for Unbound in the GUI. Am I not looking hard enough? How can I configure this in a supported fashion?
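
Partially answering myself: from what I've read since, the supported route is an include file in Unbound's override directory rather than a GUI box. Treat the path and the restart command below as assumptions on my part:

# drop a custom include where Unbound's generated config should pick it up
cat > /usr/local/etc/unbound.opnsense.d/custom.conf <<'EOF'
server:
  # example option only; whatever the linked thread suggests would go here
  serve-expired: yes
EOF
# restart Unbound so the include takes effect (or restart it from the GUI)
configctl unbound restart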
#15
General Discussion / slow first connection with IPv6
February 23, 2023, 04:27:57 PM
Since I switched to OPNsense a week ago, I've noticed that new IPv6 connections are delayed from one Linux host to another (within my subnet or outside... see below). After the first connection (e.g. ping6 host), connections are instantaneous until I leave things idle for a few minutes. Windows clients don't seem to have this issue.

At first I thought this was a DNS issue, so I added an IPv6 entry in /etc/hosts on my client PC, but it didn't make a difference. This response highlights the issue... the delay is long enough that the host initially doesn't believe a connected route exists, but it is then able to continue:

[root@backup ~]# ping -6 dns.example.com
PING dns.example.com(dns.example.com (::244::10)) 56 data bytes
From backup.example.com (::10::30) icmp_seq=1 Destination unreachable: Address unreachable
64 bytes from dns.example.com (::244::10): icmp_seq=2 ttl=63 time=0.583 ms
64 bytes from dns.example.com (::244::10): icmp_seq=3 ttl=63 time=0.659 ms


This is highly repeatable if I wait just a few minutes between tests. Is there some dynamic routing in IPv6 that I can fix so that it isn't doing the discovery every few minutes? I had no such issues with my Ubiquiti config with the same prefix and tunnel (HE.net) endpoint.

I just noticed that ipv6.google.com exhibits the same behaviour (again from a Linux host) with OPNsense as my gateway.

[root@backup ~]# ping ipv6.google.com
PING ipv6.google.com(yyz10s05-in-x0e.1e100.net (2607:f8b0:400b:80c::200e)) 56 data bytes
From backup.example.com (::10::30) icmp_seq=1 Destination unreachable: Address unreachable
64 bytes from yyz10s17-in-x0e.1e100.net (2607:f8b0:400b:80c::200e): icmp_seq=2 ttl=120 time=9.44 ms
64 bytes from yyz10s17-in-x0e.1e100.net (2607:f8b0:400b:80c::200e): icmp_seq=3 ttl=120 time=9.06 ms
64 bytes from yyz10s05-in-x0e.1e100.net (2607:f8b0:400b:80c::200e): icmp_seq=4 ttl=120 time=9.31 ms


One consequence is that ssh -6 often fails in a script, as it sees the connection as a failure; ssh -4 always works fine.
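
My current suspicion is neighbour discovery: the "Address unreachable" on the first packet looks like the client waiting for the gateway's neighbour entry to be re-resolved after it has gone stale. This is how I've been checking on the Linux client (the interface name is just an example):

# show the state of the IPv6 neighbour cache entries, gateway included
ip -6 neigh show dev eth0
# how long (in ms) an entry stays reachable before needing re-confirmation
sysctl net.ipv6.neigh.eth0.base_reachable_time_ms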