Messages - e97

#1
Quote from: ThyOnlySandman on June 06, 2024, 12:52:14 AM
This is what I set:

System > Settings > General: DNS servers blank
Disable "Allow DNS server list to be overridden by DHCP/PPP on WAN"
Disable "Do not use the local DNS service as a nameserver for this system"

Unbound on the LAN interface, listening on port 53
LAN firewall rules: source internal VLANs to destination "(this firewall)" port 53
Unbound access lists allowing the internal VLANs

Unbound - DNS over TLS:

8.8.8.8, port 853, hostname dns.google.com
1.1.1.1, port 853, hostname cloudflare-dns.com

Clients' DNS set to the OPNsense DNS. Or, if you have internal DNS servers like domain controllers, clients' DNS set to the DC and the DC's forwarders set to OPNsense on port 53. Internal DNS stays unencrypted on 53; external queries go over TLS on 853 to the servers you specify.

Thank you! That seems to have fixed the issue, and now I've got a bit of a security upgrade with DNS over TLS  ;D
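
(For reference, those DNS over TLS entries roughly correspond to an Unbound forward-zone like the sketch below. OPNsense generates its own unbound.conf, so the exact directives and file layout may differ.)

forward-zone:
    name: "."                                      # forward all queries
    forward-tls-upstream: yes                      # wrap the upstream connections in TLS
    forward-addr: 8.8.8.8@853#dns.google.com
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
# verifying the #hostname part also needs tls-cert-bundle set in the server: section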
#2
Hello, I recently upgraded from 23.x to 24.x; currently on OPNsense 24.1.6-amd64.

I previously set up my DNS using a few different servers (1.1.1.1, 8.8.8.8, 9.9.9.9 lol) and verified it with dig, https://www.dnscheck.tools/ and https://www.dnsleaktest.com/

I followed the instructions here https://forum.opnsense.org/index.php?topic=8505.0

I have Unbound enabled.

Recently I noticed a slowdown in browsing and traced the issue to the ISP DNS (provided by DHCP) being used instead of the servers I specified.
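
(For anyone checking the same thing, a quick way to see it from a shell on the firewall -- opnsense.org is just an example name to resolve, and drill works the same way if dig isn't installed on the box:)

# nameservers the firewall itself ended up with (ISP servers handed out by DHCP show up here)
cat /etc/resolv.conf

# query the local resolver and a known upstream directly, then compare answers and latency
dig opnsense.org @127.0.0.1
dig opnsense.org @9.9.9.9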

For OPNsense 24.x, what's the correct procedure / settings to use the DNS servers I specified instead of the ISP DNS provided by DHCP on WAN?
#3
Same error when trying to upgrade

Invoking upgrade script 'unbound-duckdb.py'
Abort trap (core dumped)
>>> Error in upgrade script '20-unbound-duckdb.py'


"Reset DNS data" from

Reporting > Settings > Unbound DNS reporting - "Reset DNS data"

Ran the upgrade again and it worked.
#4
The disk is filling up with netflow logs from the local collector. I only want netflow data for the past X days.

I've since turned off netflow to avoid filling the disk.

There is an old thread (20.7) about this with no response: https://forum.opnsense.org/index.php?topic=20262.msg93861

Is this possible in 23.7?

If not, here's my implementation idea that I'd like feedback on:

Since the logs are stored in SQLite, use python3 to connect to the SQLite databases and delete rows older than X days (rough sketch after the next paragraph).

This functionality could be added to flowd_aggregate, exposed on the Reporting > Settings page in the web UI, and applied only to the local collector.
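
Rough sketch of that idea (the directory, table name and timestamp column below are guesses on my part -- the actual schema flowd_aggregate uses would need to be checked first, e.g. with sqlite3 <file> .schema):

#!/usr/local/bin/python3
# prune_netflow.py -- delete aggregated netflow rows older than KEEP_DAYS
import glob
import sqlite3
import time

KEEP_DAYS = 7
NETFLOW_DIR = "/var/netflow"          # assumed location of the local collector DBs
cutoff = time.time() - KEEP_DAYS * 86400

for path in glob.glob(NETFLOW_DIR + "/*.sqlite"):
    con = sqlite3.connect(path)
    try:
        # "timeserie" / "start_time" are hypothetical names -- verify against the real schema
        con.execute("DELETE FROM timeserie WHERE start_time < ?", (cutoff,))
        con.commit()
        con.execute("VACUUM")         # shrink the file after deleting rows
    finally:
        con.close()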

...and where is the code for the local collector?
#5
23.7 Legacy Series / Re: OPNsense runs out of space
December 19, 2023, 01:03:58 AM
@m4rtin did you resolve this?

I'm having a similar issue, see https://forum.opnsense.org/index.php?topic=37633.0

I have since tried limiting logs to 5 days and turned off local capture of netflow.

#6
Quote from: doktornotor on December 18, 2023, 07:04:07 PM
Sounds like an excellent idea. At least it doesn't crash after every power failure. Also the filesystem is lz4-compressed by default, if you are space-constrained.

Most of the guides say either UFS or ZFS is fine. The installer even defaults to UFS. Outside of the lz4 compression, I don't think the filesystem choice should make much of a difference.

Re: power-loss corruption, I recently had a few power outages due to storms and none of my systems had any issues, including this OPNsense system on UFS. The drives in the systems don't have Power Loss Protection (PLP), but I use SSDs with well-reviewed controllers.

The only power-loss filesystem corruption I've had in recent years was a low-quality SD card in a Raspberry Pi, which consistently wouldn't boot after 2-3 power-loss events. After changing the SD card to a name-brand one, I haven't had an issue with power-loss corruption.

Another data point: during the same period that Pi #1 with the cheap SD card got corrupted and was re-flashed multiple times, another Raspberry Pi with a photography-grade SD card has been running for multiple years without issue.
#7
I'm using UFS - maybe I should re-do this using ZFS?

mount
# mount
/dev/gpt/rootfs on / (ufs, local, soft-updates, journaled soft-updates)
devfs on /dev (devfs)
/dev/gpt/efifs on /boot/efi (msdosfs, local)
tmpfs on /var/log (tmpfs, local)
tmpfs on /tmp (tmpfs, local)
devfs on /var/dhcpd/dev (devfs)
devfs on /var/unbound/dev (devfs)
/usr/local/lib/python3.9 on /var/unbound/usr/local/lib/python3.9 (nullfs, local, read-only, soft-updates, journaled soft-updates)


It's not swap -- see the attached screenshot: 0% of 8GB used.

I had issues with the flowd / netflow logs filling up. flowd_aggregate crashed and I tried to run it manually, but it hung, so I reset the netflow data using the web UI (Reporting > Settings > "Reset Netflow data").

I found a thread (https://www.reddit.com/r/OPNsenseFirewall/comments/qmsk9e/disk_usage_growing_cant_find_it/) saying it might be due to open file handles from the crashed flowd_aggregate, and that I could check with lsof | grep '(deleted)', but OPNsense doesn't have lsof (at least not in plugins or packages) and I'm not sure how to install it.
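
(Side note: FreeBSD's base system does include fstat and procstat, which can give a rough view of a process's open files without installing lsof:)

# find the aggregator's PID, then list its open descriptors (FreeBSD base tools)
ps aux | grep flowd_aggregate
procstat -f <PID>
fstat -p <PID>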

I previously had OPNsense lock up due to running out of disk space. Log retention is set to 7 days. I only want logs for debugging recent issues -- not long term -- so I have a RAM disk (16GB, can bump to 32GB if necessary).

I symlinked /var/netflow to the RAM disk on /var/log to avoid this, and it seems to be working, but the file sizes keep growing:

2023-12-18T03:40:16-05:00 Notice flowd_aggregate.py vacuum done
2023-12-18T03:40:15-05:00 Notice flowd_aggregate.py vacuum interface_086400.sqlite
2023-12-18T03:40:15-05:00 Notice flowd_aggregate.py vacuum interface_003600.sqlite
2023-12-18T03:40:15-05:00 Notice flowd_aggregate.py vacuum interface_000300.sqlite
2023-12-18T03:40:15-05:00 Notice flowd_aggregate.py vacuum interface_000030.sqlite
2023-12-18T03:40:15-05:00 Notice flowd_aggregate.py vacuum dst_port_086400.sqlite
2023-12-18T03:40:13-05:00 Notice flowd_aggregate.py vacuum dst_port_003600.sqlite
2023-12-18T03:40:09-05:00 Notice flowd_aggregate.py vacuum dst_port_000300.sqlite
2023-12-18T03:40:09-05:00 Notice flowd_aggregate.py vacuum src_addr_086400.sqlite
2023-12-18T03:40:09-05:00 Notice flowd_aggregate.py vacuum src_addr_003600.sqlite
2023-12-18T03:40:08-05:00 Notice flowd_aggregate.py vacuum src_addr_000300.sqlite
2023-12-18T03:39:45-05:00 Notice flowd_aggregate.py vacuum src_addr_details_086400.sqlite
2023-12-17T19:37:41-05:00 Notice flowd_aggregate.py vacuum done
2023-12-17T19:37:41-05:00 Notice flowd_aggregate.py start watching flowd
2023-12-17T19:37:41-05:00 Notice flowd_aggregate.py startup, check database.


latest df

# df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
/dev/gpt/rootfs              31G     16G     13G    56%    /
devfs                       1.0K    1.0K      0B   100%    /dev
/dev/gpt/efifs              256M    1.7M    254M     1%    /boot/efi
tmpfs                       8.0G    4.4G    3.6G    55%    /var/log
tmpfs                       8.0G    1.9M    8.0G     0%    /tmp
devfs                       1.0K    1.0K      0B   100%    /var/dhcpd/dev
devfs                       1.0K    1.0K      0B   100%    /var/unbound/dev
/usr/local/lib/python3.9     31G     16G     13G    56%    /var/unbound/usr/local/lib/python3.9


latest du
/ # du -chs *
8.0K COPYRIGHT
1.4M bin
340M boot
7.6M conf
4.0K dev
4.0K entropy
3.5M etc
4.0K home
14M lib
160K libexec
4.0K media
4.0K mnt
4.0K net
4.0K proc
4.0K rescue
48K root
5.8M sbin
  0B sys
1.9M tmp
1.4G usr
4.8G var
6.6G total


find largest files
/ # find /var/ -type f -exec du -Ah {} + | sort -h | tail -30
2.3M /var/unbound/usr/local/lib/python3.9/site-packages/netaddr/eui/iab.txt
2.7M /var/unbound/usr/local/lib/python3.9/site-packages/cryptography/hazmat/bindings/_rust.abi3.so
3.8M /var/log/flowd.log
3.9M /var/log/configd/configd_20231218.log
5.3M /var/unbound/usr/local/lib/python3.9/site-packages/netaddr/eui/oui.txt
5.7M /var/unbound/usr/local/lib/python3.9/site-packages/numpy/core/_multiarray_umath.cpython-39.so
6.1M /var/log/netflow/src_addr_086400.sqlite
6.1M /var/unbound/usr/local/lib/python3.9/config-3.9/libpython3.9.a
7.5M /var/db/pkg/vuln.xml
8.3M /var/unbound/data/unbound.duckdb
12M /var/log/flowd.log.000002
12M /var/log/flowd.log.000005
12M /var/log/flowd.log.000008
12M /var/log/flowd.log.000010
13M /var/log/flowd.log.000003
13M /var/log/flowd.log.000004
13M /var/log/flowd.log.000006
13M /var/log/flowd.log.000007
13M /var/log/flowd.log.000009
14M /var/db/pkg/local.sqlite
15M /var/log/flowd.log.000001
25M /var/log/netflow/dst_port_086400.sqlite
29M /var/log/netflow/src_addr_003600.sqlite
43M /var/unbound/usr/local/lib/python3.9/site-packages/duckdb.cpython-39.so
64M /var/log/netflow/src_addr_000300.sqlite
226M /var/log/netflow/dst_port_003600.sqlite
254M /var/log/netflow/dst_port_000300.sqlite
301M /var/log/filter/filter_20231217.log
684M /var/log/filter/filter_20231218.log
2.7G /var/log/netflow/src_addr_details_086400.sqlite

#8
23.7 Legacy Series / high disk usage, du and df differ
December 18, 2023, 05:07:10 AM
Hello,
I've been running into disk space issues -- the disk is 32GB.

# df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
/dev/gpt/rootfs              31G     16G     13G    56%    /
devfs                       1.0K    1.0K      0B   100%    /dev
/dev/gpt/efifs              256M    1.7M    254M     1%    /boot/efi
tmpfs                       8.0G    1.2G    6.8G    15%    /var/log
tmpfs                       8.0G    1.5M    8.0G     0%    /tmp
devfs                       1.0K    1.0K      0B   100%    /var/dhcpd/dev
devfs                       1.0K    1.0K      0B   100%    /var/unbound/dev
/usr/local/lib/python3.9     31G     16G     13G    56%    /var/unbound/usr/local/lib/python3.9


root@OPNsense:/ # du -chs *
8.0K COPYRIGHT
1.4M bin
340M boot
7.6M conf
4.0K dev
4.0K entropy
3.5M etc
4.0K home
14M lib
160K libexec
4.0K media
4.0K mnt
4.0K net
4.0K proc
4.0K rescue
44K root
5.8M sbin
  0B sys
1.5M tmp
1.4G usr
1.7G var
3.5G total


GUI shows 16G used.

16G vs 3.5G -- what am I missing?
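
(For the record, the usual suspects for a du/df gap like this are files hidden underneath a mount point -- e.g. data written to /var/log on the root filesystem before the tmpfs got mounted over it -- and deleted-but-still-open files. A rough way to check the first case, assuming /mnt is unused:)

# null-mount the root fs at /mnt so du can see what sits underneath the tmpfs/devfs mounts
mount -t nullfs / /mnt
du -hxd1 /mnt/var | sort -h
umount /mnt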