Messages - lostcontrol

#1
Quote from: lostcontrol on November 07, 2024, 10:18:10 PM
Not sure if this is 100% related, but I'm also stuck on 24.7.5_3 because this is the last version that works with IPv6 for me. Somehow I never get a correct IPv6 gateway with > 24.7.5_3. I suspect it is related to some changes in dhcp6c, since a newer version was shipped with 24.7.6. Unfortunately I didn't have time to look closer at the issue so far.

I took some time to look into my issue. I reverted dhcp6c as proposed, but this didn't change anything for me. Doing some googling, I realized that the default gateway is not propagated via DHCPv6 but via RA. I checked for RAs using tcpdump and noticed that I got some from fe80::c28c:8802:a581:8b62, which was the gateway OPNsense used when my IPv6 connectivity was broken. Looking at the NDP table, I noticed that the corresponding MAC address was the one of... my Draytek DSL modem :o The modem is configured in bridge mode and was somehow sending RAs to the router!? RAs from my ISP somehow didn't make it through with > 24.7.5_3!? Long story short, I disabled IPv6 on the Draytek modem, reverted to my 24.7.8 snapshot and tada... IPv6 works :)

I will be migrating to FTTH next month, so this would have solved itself eventually, but I learned a lot by debugging this.
#2
Not sure if this is 100% related, but I'm also stuck on 24.7.5_3 because this is the last version that works with IPv6 for me. Somehow I never get a correct IPv6 gateway with > 24.7.5_3. I suspect it is related to some changes in dhcp6c, since a newer version was shipped with 24.7.6. Unfortunately I didn't have time to look closer at the issue so far.
#3
Should I open a ticket on GitHub for this? Thank you

Cyril
#4
Sorry for not answering earlier. I was pretty busy.

The weird thing is that, like vmstat, top does not show more activity than before 24.7.2. Here is the output of top with a refresh rate of 60 seconds:


last pid:   674;  load averages:  0.52,  0.47,  0.43                                            up 4+02:18:50  21:08:22
91 processes:  1 running, 90 sleeping
CPU:  9.8% user,  0.0% nice,  3.9% system,  0.1% interrupt, 86.3% idle
Mem: 78M Active, 538M Inact, 2122M Wired, 104K Buf, 998M Free
ARC: 1580M Total, 847M MFU, 557M MRU, 9363K Anon, 22M Header, 142M Other
     1281M Compressed, 3081M Uncompressed, 2.41:1 Ratio
Swap: 8192M Total, 8192M Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
49492 root          1  20    0    51M    38M nanslp   0  51:56  11.08% python3.11
19870 root          4  24    0   201M   147M kqread   1  30:53   2.32% python3.11
26229 root          1  20    0    79M    52M nanslp   1  50:16   0.86% php
16926 unbound       2  20    0   101M    76M kqread   1   1:34   0.21% unbound
94552 root          1  20    0    12M  2236K select   1   3:16   0.04% udpbroadcastrelay
87910 _flowd        1  20    0    12M  2716K select   0   1:23   0.03% flowd
96971 root          1  20    0    12M  2284K select   0   1:11   0.02% powerd
48469 root          4  68    0    13M  2680K uwait    0   0:10   0.02% dpinger
30572 root          7  20    0    45M    22M select   0   0:35   0.02% kea-dhcp4
57112 root          1  20    0    23M  8176K select   0   0:10   0.02% ntpd
94018 root          1  20    0    26M    14M select   1   0:14   0.02% python3.11
94113 root          1  20    0    27M    15M select   1   0:14   0.01% python3.11
6027 root          1  20    0    19M  9012K select   1   0:00   0.01% sshd-session
26673 root          4  68    0    13M  2644K uwait    0   0:46   0.01% dpinger
45218 nobody        1  20    0    12M  2176K sbwait   0   0:31   0.01% samplicate
23713 root          2  20    0    46M    15M kqread   0   0:21   0.01% syslog-ng
58815 root          1  20    0    12M  2348K select   0   0:02   0.01% igmpproxy
82211 root          1  20    0    14M  3748K CPU0     0   0:00   0.00% top
44911 dhcpd         1  20    0    24M    12M select   1   0:03   0.00% dhcpd
59765 root          2  20    0    22M    11M nanslp   1   0:11   0.00% monit
97725 root          1  20    0    22M    11M kqread   0   0:05   0.00% lighttpd


In my case, I see a slight increase in CPU temperature. I checked my state counts and indeed, they went up after 24.7.2 and came slightly down again with 24.7.3, but are still higher than with 24.7.0 and 24.7.1 (see attached screenshot)!?

I have 165115 alias entries, most of them from the internal IPv6 bogon networks alias. I didn't change this between 24.1 and 24.7, though I don't remember the count under 24.1.

I use both IPv4 and IPv6. I didn't notice any issue with IPv6, except that I must manually reload my WAN interface to get the correct IPv6 gateway!? This was working fine before 24.7.

I disabled IDS because I thought it might be causing the high CPU load, but this did not change anything.

But I think I just found the problem :) I looked at /var/db/rrd/updaterrd.sh and saw that the CPU stats come from cpustats. I ran that program in a loop, and clearly it just reports the instantaneous CPU load. For RRD this will not work: the value depends on whatever happens to be running at the moment of collection and ignores everything that happened during the rest of the minute. vmstat 60 or top -s 60 show the correct behavior.


root@opnsense:~ # sh -c "while [ true ] ; do cpustats ; sleep 2 ; done"
1.6:0.0:2.3:0.0:96.1
49.6:0.0:2.0:0.0:48.4
0.0:0.0:1.6:0.0:98.4
0.0:0.0:1.6:0.4:98.0
6.7:0.0:2.7:0.4:90.2
0.8:0.0:2.0:0.0:97.2
0.0:0.0:2.8:0.0:97.2
0.8:0.0:5.9:0.0:93.3
0.0:0.0:1.9:0.0:98.1
0.8:0.0:4.7:0.0:94.5
1.5:0.0:5.8:0.0:92.7
12.5:0.0:4.3:0.4:82.8
0.4:0.0:1.2:0.0:98.4
0.4:0.0:3.9:0.0:95.7
0.4:0.0:6.3:0.0:93.4
0.0:0.0:1.5:0.0:98.5
0.8:0.0:3.0:0.0:96.2
21.3:0.0:18.9:0.0:59.8
41.5:0.0:20.2:0.0:38.4
0.4:0.0:1.6:0.0:98.0
0.8:0.0:1.6:0.0:97.7
11.2:0.0:4.2:0.0:84.6
49.2:0.0:2.7:0.4:47.7
12.9:0.0:6.6:0.4:80.1
0.8:0.0:1.6:0.0:97.7
0.0:0.0:3.1:0.0:96.9
0.4:0.0:3.1:0.0:96.5
0.4:0.0:4.3:0.0:95.3
13.3:0.0:3.5:0.0:83.2



root@opnsense:~ # sh -c "while [ true ] ; do cpustats ; sleep 30 ; done"
44.5:0.0:8.7:0.0:46.9
0.4:0.0:2.7:0.0:96.9
0.4:0.0:1.1:0.0:98.5
0.4:0.0:1.9:0.4:97.3
0.4:0.0:1.6:0.0:98.0
45.3:0.0:3.5:0.0:51.2
0.0:0.0:2.0:0.0:98.0


I don't know how the collection was done before this script was moved to a cron job, but something clearly worked differently before.
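To illustrate the difference: tools like vmstat 60 or top -s 60 compute utilization from the delta of the kernel's cumulative CPU tick counters over the whole interval, whereas an instantaneous sampler like cpustats only sees the load at the moment it runs. A minimal sketch of the interval approach (the tick snapshots below are made-up numbers, not from a real system; on FreeBSD the cumulative counters are exposed via the kern.cp_time sysctl):

```python
def cpu_percentages(ticks_start, ticks_end):
    """Compute per-state CPU percentages (user, nice, system,
    interrupt, idle) from two snapshots of cumulative tick
    counters, the way interval-averaging tools do."""
    deltas = [e - s for s, e in zip(ticks_start, ticks_end)]
    total = sum(deltas)
    return [round(100.0 * d / total, 1) for d in deltas]

# Hypothetical snapshots taken 60 s apart (user, nice, sys, intr, idle):
t0 = [1000, 0, 300, 10, 8690]
t1 = [1600, 0, 480, 16, 13904]
print(cpu_percentages(t0, t1))  # [10.0, 0.0, 3.0, 0.1, 86.9] over the whole minute
```

The averaged result reflects everything that ran during the interval, which is exactly what RRD graphs are supposed to show.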
#5
Hi all,

I tried my luck on Reddit but got no answer so I'm posting here.

Since upgrading to 24.7.2, I see weird values (abnormally high) for CPU utilization. The previous releases seemed to be fine in this regard (but the RRD collection was broken).

I see ~40% constant usage now (see screenshot in attachment), whereas vmstat gives me less than 20%!?

root@opnsense:~ # vmstat 60
procs    memory    page                      disks       faults       cpu
r  b  w  avm  fre  flt  re  pi  po   fr   sr ada0 pas0   in   sy   cs us sy id
0  0 39 518G 504M 2.3k   4   0   1 2.0k  122    0    0  239 1.1k  927  9  3 87
0  0 39 518G 503M 2.3k   0   0   1 2.0k  171   10    0  306 1.1k 1.1k 10  3 86
1  0 39 518G 502M 2.2k   0   0   1 2.0k  171    7    0  225 1.0k  979 11  2 85
0  0 39 518G 502M 2.2k   0   0   1 2.0k  171   10    0  236 1.0k 1.0k 10  3 86
0  0 39 518G 502M 2.2k   0   0   1 2.0k  171    7    0  986 1.0k 2.6k 11  3 84
0  0 39 518G 502M 2.3k   0   0   1 2.0k  171   10    0  315 1.1k 1.1k 11  3 85
0  0 39 518G 501M 2.2k   0   0   1 2.0k  171    7    0  229 1.0k  969 11  2 85
0  0 39 518G 501M 2.3k   0   0   1 2.0k  171   11    0  309 1.1k 1.1k 11  2 85
1  0 39 518G 500M 2.2k   0   0   1 2.0k  171    8    0  379 1.2k 1.2k 13  3 83
2  0 39 518G 499M 2.6k   0   0   1 2.3k  172   10    0  335 1.3k 1.3k 13  3 83
1  0 39 518G 490M 2.4k   0   0   1 2.2k  173    6    0  489 1.1k 1.3k  8  3 87
0  0 39 518G 490M 2.2k   0   0   1 2.0k  172   10    0  284 1.0k 1.1k  9  2 87


Has anyone noticed a similar behavior?

Thank you.

EDIT: Here is the Reddit post