Messages - Ricardo

#121
This is a dedicated hardware router (APU2); no virtualization is involved (the router is neither a VM guest, nor does it run as a VM host for downlink VM guests).
#122
Hi OPNsense folks!
I want to make use of the RAM sitting in my router, which is mostly idle (the dashboard says 600 MB of the 4096 MB is utilized; the rest looks unused).
I set up tmpfs for /var and /tmp, but those hold only a minimal amount of files.
I use Unbound DNS to cache records in memory, but that is also a very minimal amount.
I use Maltrail, but with the current setup it stores its files not under /var or /tmp but under /root, so it is torturing the underlying SSD, not the RAM.
I enabled NetFlow in the past; it consumed a significant amount of RAM, but the Python scripts running in the background killed the already underpowered CPU as well, so I stopped it.
What other service(s), if enabled, would benefit from the plenty of available RAM while keeping CPU usage low?
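For context, a minimal sketch of what tmpfs-backed /var and /tmp look like in /etc/fstab on FreeBSD; the sizes are placeholders, and on OPNsense this is normally handled by the RAM disk options under System > Settings > Miscellaneous rather than edited by hand:

# illustrative /etc/fstab entries; OPNsense manages its RAM disks itself
tmpfs   /tmp    tmpfs   rw,mode=1777,size=512m   0  0
tmpfs   /var    tmpfs   rw,mode=755,size=1024m   0  0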
#123
Strange thing: since 20.1.4 was installed last week, it now accepts the newly set password. The Maltrail plugin is 1.5 and the maltrail package is 0.17.
#124
Hello @mimugmail,

Did you manage to check the password change process?
#125
Is there anybody else who sees similar symptoms under a similar router config (e.g. TMPFS)?
#126
1) Maybe; I cannot say for sure. I use TMPFS on my main router to minimize SSD write wear.
2) I meant memory usage, not CPU usage.
#127
0) To be honest, I haven't managed to perform that simple-looking password change so far. If I copy-paste a calculated SHA256 hash of a simple string (without spaces, trailing ENTER, etc.), I am not allowed to log in to the Maltrail GUI on ROUTERIP:8338 with that new password. The default password lets me in, though.
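One thing worth double-checking when generating the hash on the console: a plain echo appends a newline, which changes the digest, so the pasted hash will never match. A minimal sketch, assuming Maltrail expects the hex SHA256 of the bare password string:

# wrong: echo appends '\n', so this hashes "MyNewPassword\n"
echo 'MyNewPassword' | sha256
# right: -n suppresses the newline, hashing only "MyNewPassword"
echo -n 'MyNewPassword' | sha256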

1)
root@FW01:/var/log # pwd
/var/log
root@FW01:/var/log # ls -l maltrail
lrwxr-xr-x  1 root  wheel  22 Mar  6 20:00 maltrail -> /root/var/log/maltrail
root@FW01:/var/log #

root@FW01:/var/log # cd maltrail/
root@FW01:/var/log/maltrail # ls -l
total 1428
-rw-r--r--  1 root  wheel    2562 Feb  2 23:23 2020-02-02.log
-rw-r--r--  1 root  wheel   24497 Feb  3 20:50 2020-02-03.log
........
-rw-r--r--  1 root  wheel   27512 Apr  1 22:31 2020-04-01.log
-rw-r--r--  1 root  wheel   10968 Apr  2 22:17 2020-04-02.log
-rw-r--r--  1 root  wheel    3911 Apr  3 11:50 2020-04-03.log
-rw-rw-rw-  1 root  wheel     728 Apr  3 15:33 error.log
lrwxr-xr-x  1 root  wheel      22 Feb  2 06:26 maltrail -> /root/var/log/maltrail
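Side note: the listing above shows that /var/log/maltrail is just a symlink back into /root/var/log/maltrail, so the writes still land on the SSD. A sketch of redirecting the logs onto the tmpfs-backed /var, assuming maltrail.conf exposes a LOG_DIR setting (the option name and config path are assumptions; check the installed file):

# point the sensor's log directory at tmpfs-backed /var (paths assumed)
sed -i '' 's|^LOG_DIR .*|LOG_DIR /var/log/maltrail|' /usr/local/share/maltrail/maltrail.conf
rm /var/log/maltrail          # drop the old symlink into /root
mkdir -p /var/log/maltrail    # real directory on the tmpfs mount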


2)
last pid: 24340;  load averages:  0.96,  0.87,  0.83                                                                                                                           up 27+18:31:48  15:32:24
68 processes:  2 running, 65 sleeping, 1 waiting
CPU 0:  4.7% user,  0.0% nice,  1.3% system,  2.4% interrupt, 91.7% idle
CPU 1:  9.9% user,  0.0% nice,  2.4% system,  0.0% interrupt, 87.7% idle
CPU 2: 10.2% user,  0.0% nice,  1.7% system,  0.3% interrupt, 87.7% idle
CPU 3:  8.5% user,  0.0% nice,  1.6% system,  0.2% interrupt, 89.8% idle
Mem: 205M Active, 1979M Inact, 995M Laundry, 547M Wired, 279M Buf, 192M Free
Swap:

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
   11 root          4 155 ki31     0K    64K CPU0    0 2318.7 355.99% [idle]
68642 root          3  26    0   799M   746M select  3  84.1H  13.17% python3 /usr/local/share/maltrail/sensor.py (python3.7)
34290 root          3  26    0   799M   751M select  1  84.1H  13.15% python3 /usr/local/share/maltrail/sensor.py (python3.7)
65996 root          3  26    0   799M   748M select  0  84.1H  13.15% python3 /usr/local/share/maltrail/sensor.py (python3.7)
   12 root         34 -56    -     0K   544K WAIT   -1 829:41   2.02% [intr]
 8285 root          3  20    0  1128M  1104M select  3  23.1H   1.99% python3 /usr/local/share/maltrail/sensor.py (python3.7)
   15 root          1 -16    -     0K    16K pftm    3  27:25   0.08% [pf purge]
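Side note on the top output above: it shows four sensor.py instances, three of them with 84.1 hours of CPU time each, which looks like old sensors left behind by restarts. A sketch for cleaning that up (the configctl action name is an assumption about what the os-maltrail plugin provides):

pgrep -fl 'maltrail/sensor.py'   # list every running sensor instance
pkill -f 'maltrail/sensor.py'    # stop them all
configctl maltrail restart       # assumed plugin action; verify with pgrep again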
#128
Hello all,

I tried to find answers to my questions on the Maltrail site (https://github.com/stamparm/maltrail), but without success.

0) This is rather an improvement request: please make the password change for the Maltrail admin account less painful than it currently is via the main OPNsense admin GUI.

1) Maltrail creates its files under /.maltrail and also writes to /root/var/log instead of /var. My /var and /tmp are on TMPFS to keep constant log-related writes from killing the small SSD. Is there a plan to put the Maltrail package files in a proper location and use the standard /var and /tmp for frequently written log files? I cannot really measure how much disk write traffic Maltrail generates on the rootfs by writing its files there; MONIT most probably sums true rootfs writes and tmpfs writes together, which can be misleading.
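One way to separate true SSD writes from tmpfs writes is to watch the physical device directly; tmpfs I/O never reaches it, so per-device counters exclude it (the device name ada0 is an assumption for this APU2):

# extended per-device statistics, refreshed every second; tmpfs traffic is not counted
iostat -x -w 1 ada0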

2) It seems memory usage has skyrocketed in the past few days (uptime is currently around one month), even after I restarted the Maltrail server service. Is there any way to tell whether the memory usage is "normal" or something is leaking memory? Should I schedule a maintenance reboot of the whole router someday?

3) Can some Maltrail threats be manually marked as bypassed? They are false positives and harmless, but due to their volume they are reported frequently and cause a lot of noise.
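As far as I can tell, upstream Maltrail has a whitelist mechanism for exactly this; a sketch, assuming the USER_WHITELIST option in maltrail.conf and a one-entry-per-line file (option name and paths are assumptions):

# add a benign trail to a local whitelist file (path assumed)
echo 'benign.example.com' >> /usr/local/share/maltrail/my_whitelist.txt
# then reference it from maltrail.conf, e.g.:
#   USER_WHITELIST /usr/local/share/maltrail/my_whitelist.txt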

In general, I am looking for more in-depth tutorials on how to fine-tune Maltrail. The official GitHub page approaches things from a different perspective and does not help answer the real-world questions one will ask about this software.
#129
sub
#130
20.1 Legacy Series / Re: Show log error
March 05, 2020, 01:33:33 PM
Thanks, it fixed the issue!
#131
20.1 Legacy Series / Show log error
March 05, 2020, 07:57:43 AM
Hello,
I tried to check the GENERAL log under Logging.

It stuck in "loading..." state. I checked BACKEND log, and I see the following:

configd.py: [a57cd679-67bc-4e18-b1f0-7973db58e4d9] Script action failed with Command '/usr/local/opnsense/scripts/systemhealth/queryLog.py --limit '-1' --offset '0' --filter '' --module 'core' --filename 'system'' returned non-zero exit status 1. at
Traceback (most recent call last):
  File "/usr/local/opnsense/service/modules/processhandler.py", line 484, in execute
    stdout=output_stream, stderr=error_stream)
  File "/usr/local/lib/python3.7/subprocess.py", line 363, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '/usr/local/opnsense/scripts/systemhealth/queryLog.py --limit '-1' --offset '0' --filter '' --module 'core' --filename 'system'' returned non-zero exit status 1.

This looks like Greek to me, unfortunately. The /var/log/system.log does exist and contains valid log entries.
I had a power outage a week ago; the only thing I suspect is that the filesystem may have been damaged, but I am not sure how to confirm this.
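A minimal sketch for checking a UFS root after a power loss (a read-only pass is safe on the mounted filesystem; an actual repair needs single-user mode):

fsck -n /     # dry run: reports inconsistencies without writing anything
# for a real repair, reboot into single-user mode, then:
fsck -y /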
#132
20.1 Legacy Series / Re: Permanent VNSTAT database on MFS
February 09, 2020, 07:51:52 AM
I changed from UFS back to MFS (and switched back to the WAN interface), and this time it seems it saved the database file correctly. I will check after a couple of reboots whether it is still correct.
#133
20.1 Legacy Series / Re: Permanent VNSTAT database on MFS
February 09, 2020, 07:13:11 AM
Ok, it took some time to find a proper window for a reboot, but here it is:

Changing the listen interface from WAN/PPPoE to the static LAN interface still left the vnstat service unable to start. Trying to start the service manually also failed.
Resetting the vnstat database immediately got the service started without a hiccup.

Next test: switching /var from MFS to normal UFS fixed the service startup issue, but the database had to be reset, otherwise the stat pages showed an empty page. After the first reset, because the database is no longer stored on MFS, the next full reboot was also successful and I did not have to reset the database again.
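For reference, resetting the vnstat database by hand amounts to something like this sketch (the service name and database path are assumptions based on FreeBSD port defaults):

service vnstatd stop          # assumed rc script name
rm -f /var/db/vnstat/*        # assumed default database directory
service vnstatd start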
#134
20.1 Legacy Series / Re: Permanent VNSTAT database on MFS
February 05, 2020, 01:01:16 PM
I will try this evening to switch the measurement from the WAN to the LAN interface. What is the next step if it seems to work fine bound to the LAN?
#135
20.1 Legacy Series / Re: Permanent VNSTAT database on MFS
February 05, 2020, 11:31:33 AM
I will try. What should the visible result be? No service crash, and traffic totals preserved across reboots?