Topics - fabianodelg

#1
I'm running the latest OPNsense on my APU4 with various optimisations (RSS and hardware offloading).

All in all I'm quite satisfied considering my current ISP speed (500 Mbit) and the 12 VLANs (I don't route traffic between them though).

I'm aware that the APU4 is not a super powerful board, but with other firewall OSes (e.g. OpenWRT) it routes at gigabit speed between VLANs with zero effort and almost no CPU usage.

I've noticed that most of the time the processes hogging my APU4 under OPNsense are Python processes; I'm wondering whether the graphical interface (and everything related to it) is actually stealing most of the CPU power needed to route packets...
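
(For what it's worth, this is how I've been checking which processes eat the CPU; nothing OPNsense-specific, just standard FreeBSD tools:)

# live view sorted by CPU usage, including system processes
top -S -o cpu
# one-shot snapshot of the busiest processes
ps -auxr | head -15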

Hence my 'nice to have': what about an OPNsense 'light' with a minimal UI (and feature set), focused on packet routing, firewalling, VLANs, QoS, etc.? A sort of minimal distro...

What are your thoughts?

Thanks
F
#2
Hi everyone,

There are quite a few posts regarding the INSPECT function (which lets you see when a firewall rule was evaluated as well as how many bytes that specific rule has matched on your network).

What's not clear to me is why the counters are zeroed (I believe every 24 h) by some process (cron?), while I'd like them NOT to be zeroed.
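
For reference, this is how I read the counters from the shell (standard pf commands; I believe these are the same counters the INSPECT button shows, though that's an assumption on my part):

# per-rule counters (Evaluations / Packets / Bytes / States)
pfctl -v -s rules
# global pf status and counters
pfctl -s info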

I've read that this is due to the scheduler being active (as there may be firewall rules scheduled for execution), but that's not my case: I have nothing in the scheduler section (and of course no scheduled firewall rules).

In the crontab for the user root I can see these jobs:

#minute   hour   mday   month   wday   command
1   *   *   *   *   (/usr/local/sbin/configctl -d syslog archive) > /dev/null
2   *   *   *   *   (/usr/local/sbin/expiretable -v -t 3600 sshlockout) > /dev/null
3   *   *   *   *   (/usr/local/sbin/expiretable -v -t 3600 virusprot) > /dev/null
4   *   *   *   *   (/usr/local/etc/rc.expireaccounts) > /dev/null
*/4   *   *   *   *   (/usr/local/sbin/ping_hosts.sh) > /dev/null
0   22   *   *   *   (/usr/local/sbin/configctl -d firmware changelog cron) > /dev/null
0   1   *   *   *   (/usr/local/sbin/configctl -d system remote backup) > /dev/null
1   3   1   *   *   (/usr/local/sbin/configctl -d filter schedule bogons) > /dev/null
*   *   *   *   *   (/usr/local/bin/flock -n -E 0 -o /tmp/filter_update_tables.lock /usr/local/opnsense/scripts/filter/update_tables.py) > /dev/null

while for the user nobody (which I believe is used by the UI):

# DO NOT EDIT THIS FILE -- OPNsense auto-generated file
#
# User-defined crontab files can be loaded via /etc/cron.d
# or /usr/local/etc/cron.d and follow the same format as
# /etc/crontab, see the crontab(5) manual page.
SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
#minute   hour   mday   month   wday   command

(this confirms that I have no schedule configured).

Can any of the jobs scheduled for the user 'root' be the cause of the counters being zeroed on a regular basis?

If not, where else should I look?

Thanks in advance for any answer and help on this matter!
#3
Zenarmor (Sensei) / Zenarmor report bug?
April 08, 2022, 12:33:49 PM
Hi there,

I'm on ZenArmor 1.11 + OPNsense 22.1.4_1; since the update to ZenArmor 1.11, if I browse the report section and look at a 'session detail', the SRC IP and SRC HOSTNAME appear to be a random Class A IP address, while the MAC address belongs to a valid device (at least one that I recognise!) in my network.

When I first saw this I nearly had a heart attack :) as I thought my network had been compromised; looking at it in more detail, the behaviour is consistent across all the reporting sections (the SRC IP is always a random Class A IP address but the MAC addresses are valid and belong to my devices).

I've also double-checked on my firewall with arp -a and the ARP cache is consistent.

Is this something anyone else is experiencing?

Thanks
F
#4
I've been using Sensei for a while and I'm very happy with the product's capabilities.

What I'm not so happy with is how Sensei deals with TripAdvisor: the page looks all broken into pieces and is not really enjoyable.

The only filter I'm applying concerns Ads, and in App Control TripAdvisor is enabled.

I also whitelisted the following domains:

e10952.b.akamaiedge.net
edge.tacdn.com
edgekey.net
policy.www.tripadvisor.com.edge.tacdn.com
tripadvisor.com
tripadvisor.com.edge.tacdn.com
www.tripadvisor.com.edgekey.net

with no joy. What else should I do to get TripAdvisor back on my laptop?

Thank you for your help, much appreciated!
#5
Zenarmor (Sensei) / Sensei and bufferbloat
August 24, 2021, 11:19:39 PM
Hi all

Since I installed Sensei on my APU2 I've noticed that I started to experience quite serious bufferbloat; I did a few tests and could see that if I run a speedtest while pinging 1.1.1.1 there is very high latency (>300 ms) and random packet loss (I have a 100/10 WAN link).
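
For anyone who wants to reproduce the test, this is roughly what I ran from an SSH session (plain FreeBSD ping; the speedtest was started from a browser while the ping was running):

# 60 pings to the WAN while the speedtest saturates the link
ping -c 60 1.1.1.1 > /tmp/ping_under_load.txt
# min/avg/max latency and packet loss are in the last two lines
tail -2 /tmp/ping_under_load.txt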

DSLReports is giving me an F: http://www.dslreports.com/speedtest/69317047

I have FQ_CoDel implemented, but it seems there's not much more that can be done to resolve it.
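
(To double-check that the shaper rules are really in place I look at the ipfw/dummynet side directly; this assumes the shaper in use is the built-in OPNsense one, which is based on ipfw:)

# show the configured schedulers -- the type should read FQ_CODEL
ipfw sched show
# show the pipes and their bandwidth limits
ipfw pipe show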

Switching Sensei to passive mode, the ping latency disappears (during the speedtest, <30 ms) and DSLReports now gives me an A.

Is this a case of underpowered hardware, or is there any tweak that can be done to improve Sensei's performance?

My APU2 runs on an AMD GX-412TC SoC (4 cores) with 4 GB RAM; the BIOS has been upgraded to allow 1.4 GHz.

Thanks
F
#6
Zenarmor (Sensei) / Eastpect and file system full
August 10, 2021, 11:08:48 AM
Hi everyone,

This morning, during my usual checks, I found out that I was unable to add any record to a whitelist for a specific policy.

I SSH'ed into my OPNsense box and dmesg showed the following:

pid 998 (eastpect), uid 0 inumber 29 on /usr/local/sensei/output/active/temp: filesystem full (repeated a number of times).

The output of df gave me:

root@Router:/usr/local/sensei/output/active/temp # df
Filesystem      1K-blocks    Used    Avail Capacity  Mounted on
/dev/gpt/rootfs  47628560 5021924 38796352    11%    /
devfs                   1       1        0   100%    /dev
devfs                   1       1        0   100%    /var/dhcpd/dev
devfs                   1       1        0   100%    /var/unbound/dev
/dev/md43           49180      48    45200     0%    /usr/local/sensei/output/active/temp
root@Router:/usr/local/sensei/output/active/temp #

while the content of /usr/local/sensei/output/active/temp shows the following:

root@Router:/usr/local/sensei/output/active/temp # ls -l
total 64
drwxrwxr-x  2 root  operator    512 Aug  6 10:29 .snap
-rw-------  1 root  wheel      4083 Aug 10 10:06 0_alert_46.ipdr
-rw-------  1 root  wheel      8175 Aug 10 10:06 0_conn_45.ipdr.ready
-rw-------  1 root  wheel      1356 Aug 10 10:06 0_conn_46.ipdr
-rw-------  1 root  wheel     35457 Aug 10 10:06 0_dns_15.ipdr
-rw-------  1 root  wheel      3190 Aug 10 10:06 0_http_47.ipdr
-rw-------  1 root  wheel      1256 Aug 10 10:06 0_tls_12.ipdr
root@Router:/usr/local/sensei/output/active/temp #

As you can see from the df output above, the partition is about 48 MB, only about 48 KB is in use (roughly 44 MB available), and the capacity column shows 0% used.
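
One thing I still have to rule out (just a guess on my part, since the block usage clearly isn't the problem) is whether the small md filesystem ran out of inodes rather than blocks:

# compare block usage with inode usage for the temp filesystem
df -i /usr/local/sensei/output/active/temp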

Has anyone experienced similar behaviour? While the filtering functionality (and reporting) doesn't seem to be affected, the UI definitely is, at least when it comes to adding IP addresses to a whitelist.

Is this a bug?

Thanks

#7
Hi everyone,

I'd like to share a trick that solved one of the issues I had using Sensei on my APU2.

My APU2 has the following configuration:

- AMD GX-412TC SOC (4 cores) (firmware updated to gain 1.4GHz)
- 4 GB RAM
- 60 GB SSD

Sensei marked my hardware as low-end, proposing the installation of a local MongoDB or a remote Elasticsearch. To be honest, I have no desire to install Elasticsearch on a separate server (and provide the necessary resiliency and security), so MongoDB was the perfect answer.

Everything worked (and is working) fine, but I noticed that since the last startup the memory allocation kept growing to the point that the system started to swap (with all the negative consequences of a system that is swapping out memory pages).
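
(For anyone wanting to watch the same symptom, I simply kept an eye on swap and resident memory from the shell; standard FreeBSD tools, nothing Sensei-specific:)

# swap usage in human-readable form
swapinfo -h
# one-shot process list sorted by resident memory
top -b -o res | head -20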

I did some research on MongoDB tuning and found a parameter that can be set in the mongodb.conf config file to limit the amount of caching MongoDB will use.

Reading the MongoDB documentation:

"Memory Use
With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.

Starting in MongoDB 3.4, the default WiredTiger internal cache size is the larger of either:

50% of (RAM - 1 GB), or
256 MB.
For example, on a system with a total of 4GB of RAM the WiredTiger cache will use 1.5GB of RAM (0.5 * (4 GB - 1 GB) = 1.5 GB). Conversely, a system with a total of 1.25 GB of RAM will allocate 256 MB to the WiredTiger cache because that is more than half of the total RAM minus one gigabyte (0.5 * (1.25 GB - 1 GB) = 128 MB < 256 MB)."

On a system with 4 GB (and a few other things running), 1.5 GB can be too much. Changing this value to as low as 0.5 (512 MB) would not make any significant impact on performance (MongoDB will use the OS caching mechanism regardless), but it keeps the memory allocation well under control.

To change the setting, enable SSH access to your OPNsense firewall and, as the root user, edit /usr/local/etc/mongodb.conf as follows:


# Where and how to store data.
storage:
  dbPath: /usr/local/datastore/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
  wiredTiger:
    engineConfig:
        cacheSizeGB: 0.5
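
To confirm the new limit is actually picked up once MongoDB has been restarted, I query the server status; this assumes the mongo shell that ships with the MongoDB package is on the PATH and the database listens on the default local port:

# maximum WiredTiger cache in bytes (about 536870912 when cacheSizeGB is 0.5)
mongo --quiet --eval 'db.serverStatus().wiredTiger.cache["maximum bytes configured"]'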

Feel free to experiment; in my case, as I don't run anything but Sensei, I set it to 1 (1 GB). Since then, my system is not swapping at all and everything works without issue.

PS: Sensei team: what a great product. I purchased a Home license to cover my 60 devices and I'm delighted with it!!! If only the number of policies could be raised to 5... (I did the survey :) )

#8
Hi everyone,

I'm new to the forum, which I've found to be a great source of knowledge.

I'm using OPNsense 21.7 on an Intel NUC (i7 + 32 GB RAM + 1 TB NVMe; the LAN interface runs on the NUC's onboard Intel 10/100/1000 NIC, while the WAN is an external USB 10/100/1000 adapter) and I'm running the following services:

- Netflow
- Sensei (latest build)
- uPnP
- Web Proxy + C-ICAP (squid is configured in transparent mode with the SSL part only logging the SNI information)

I'd like to configure the traffic shaper to be able to assign different priorities to some of the devices in my network. I'm aware that all the traffic between the WAN and the LAN is handled by Squid; with that in mind, how should I configure the shaper so it remains effective with Squid?

Thanks in advance for your help!