Messages - craig

#1
I believe so - I've had a search and can't find any links to firmware or docs on how to update.
#2
I experienced one crash on 24.7 after installing my GPON-ONU-34-20BI from fs.com, but crashes are far more frequent on 25.1.

textdump.tar.0 (ddb.txt):
db:0:kdb.enter.default>  run lockinfo
db:1:lockinfo> show locks
No such command; use "help" to list available commands
db:1:lockinfo>  show alllocks
No such command; use "help" to list available commands
db:1:lockinfo>  show lockedvnods
Locked vnodes
db:0:kdb.enter.default>  show pcpu
cpuid        = 2
dynamic pcpu = 0xfffffe00b6a71080
curthread    = 0xfffff80001e78000: pid 7 tid 100165 critnest 1 "pf purge"
curpcb       = 0xfffff80001e78520
fpcurthread  = none
idlethread   = 0xfffff80001ae0740: tid 100005 "idle: cpu2"
self         = 0xffffffff82c12000
curpmap      = 0xffffffff81b81670
tssp         = 0xffffffff82c12384
rsp0         = 0xfffffe0103867000
kcr3         = 0xffffffffffffffff
ucr3         = 0xffffffffffffffff
scr3         = 0x0
gs32p        = 0xffffffff82c12404
ldt          = 0xffffffff82c12444
tss          = 0xffffffff82c12434
curvnet      = 0xfffff800012a8a80
db:0:kdb.enter.default>  bt
Tracing pid 7 tid 100165 td 0xfffff80001e78000
kdb_enter() at kdb_enter+0x33/frame 0xfffffe0103866c20
panic() at panic+0x43/frame 0xfffffe0103866c80
trap_fatal() at trap_fatal+0x40b/frame 0xfffffe0103866ce0
trap_pfault() at trap_pfault+0x46/frame 0xfffffe0103866d30
calltrap() at calltrap+0x8/frame 0xfffffe0103866d30
--- trap 0xc, rip = 0xffffffff8216ad9c, rsp = 0xfffffe0103866e00, rbp = 0xfffffe0103866e30 ---
pf_detach_state() at pf_detach_state+0x5fc/frame 0xfffffe0103866e30
pf_unlink_state() at pf_unlink_state+0x290/frame 0xfffffe0103866e70
pf_purge_expired_states() at pf_purge_expired_states+0x188/frame 0xfffffe0103866ec0
pf_purge_thread() at pf_purge_thread+0x13b/frame 0xfffffe0103866ef0
fork_exit() at fork_exit+0x7f/frame 0xfffffe0103866f30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe0103866f30
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
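
In case anyone wants to dig through it themselves, this is roughly how I pulled the ddb session out of the textdump (a sketch, assuming the default FreeBSD /var/crash location - adjust paths for your install):

```
# Unpack the textdump saved by savecore(8); the archive contains ddb.txt,
# msgbuf.txt, panic.txt, version.txt and friends.
cd /var/crash
tar -xvf textdump.tar.0
less ddb.txt     # the captured ddb session (lockinfo, pcpu, bt above)
```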
#3
I've been getting much lower speeds when running speedtest directly on my DEC2750 vs on a Linux VM (which is ultimately connected through the DEC2750).

I've included speedtests from each below - I was wondering if anyone else has experienced this, and what the cause might be?

Linux VM
speedtest --server-id 62845

   Speedtest by Ookla

      Server: Netcalibre - London (id: 62845)
         ISP: Netcalibre
Idle Latency:    14.07 ms   (jitter: 0.11ms, low: 13.98ms, high: 14.26ms)
    Download:   764.81 Mbps (data used: 1.3 GB)
                 13.28 ms   (jitter: 0.56ms, low: 12.38ms, high: 20.27ms)
      Upload:   107.42 Mbps (data used: 53.8 MB)
                 13.97 ms   (jitter: 0.36ms, low: 12.78ms, high: 15.01ms)
Packet Loss:     0.0%
  Result URL: https://www.speedtest.net/result/c/5bdd45ce-6b89-4767-bd4b-67d3bb8563ab



DEC2750
speedtest --server-id 62845

   Speedtest by Ookla

      Server: Netcalibre - London (id: 62845)
         ISP: Netcalibre
Idle Latency:    14.02 ms   (jitter: 0.14ms, low: 13.91ms, high: 14.17ms)
    Download:   295.77 Mbps (data used: 493.8 MB)
                 13.07 ms   (jitter: 1.86ms, low: 12.15ms, high: 232.27ms)
      Upload:   105.77 Mbps (data used: 101.0 MB)
                 13.52 ms   (jitter: 0.57ms, low: 12.11ms, high: 16.02ms)
Packet Loss:     0.0%
  Result URL: https://www.speedtest.net/result/c/10093fda-0d10-43fb-841e-160a16f54056
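
(For anyone wanting to reproduce the comparison, an equivalent iperf3 cross-check run from both machines might look like the sketch below - the server address is a placeholder, not a real endpoint.)

```
# Run from the DEC2750 and from the Linux VM against the same server,
# in both directions, to separate endpoint performance from routing.
iperf3 -c iperf.example.net -P 4        # upload, 4 parallel streams
iperf3 -c iperf.example.net -P 4 -R     # download (reverse mode)
```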
#4
I am still experiencing this issue - it's becoming incredibly annoying.
#5
This is also still happening for me - I've been working through disabling functionality (shaper, jumbo frames, etc.) to try to figure it out, but it's a slow process.

It does look like IPv6 is going to be my next target though, as `ip6_tryforward()` is mentioned in the trace (see the sketch below the trace).

Fatal trap 12: page fault while in kernel mode
cpuid = 6; apic id = 06
fault virtual address = 0x10
fault code = supervisor read data, page not present
instruction pointer = 0x20:0xffffffff80ea3764
stack pointer         = 0x28:0xfffffe00e013eca0

frame pointer         = 0x28:0xfffffe00e013ed10

Fatal trap 12: page fault while in kernel mode
cpuid = 5; code segment = base 0x0, limit 0xfffff, type 0x1b
apic id = 05
fault virtual address = 0x10
fault code = supervisor read data, page not present
= DPL 0, pres 1, long 1, def32 0, gran 1
instruction pointer = 0x20:0xffffffff80ea3764
processor eflags = interrupt enabled, resume, stack pointer         = 0x28:0xfffffe00e0143ca0
IOPL = 0
current process = 12 (swi1: netisr 6)
trap number = 12
frame pointer         = 0x28:0xfffffe00e0143d10
code segment = base 0x0, limit 0xfffff, type 0x1b
panic: page fault
cpuid = 6
time = 1700089902
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe00e013ea60
vpanic() at vpanic+0x151/frame 0xfffffe00e013eab0
panic() at panic+0x43/frame 0xfffffe00e013eb10
trap_fatal() at trap_fatal+0x387/frame 0xfffffe00e013eb70
trap_pfault() at trap_pfault+0x4f/frame 0xfffffe00e013ebd0
calltrap() at calltrap+0x8/frame 0xfffffe00e013ebd0
--- trap 0xc, rip = 0xffffffff80ea3764, rsp = 0xfffffe00e013eca0, rbp = 0xfffffe00e013ed10 ---
ip6_tryforward() at ip6_tryforward+0x274/frame 0xfffffe00e013ed10
ip6_input() at ip6_input+0x5e4/frame 0xfffffe00e013edf0
swi_net() at swi_net+0x12b/frame 0xfffffe00e013ee60
ithread_loop() at ithread_loop+0x25a/frame 0xfffffe00e013eef0
fork_exit() at fork_exit+0x7e/frame 0xfffffe00e013ef30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe00e013ef30
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
KDB: enter: panic
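
As a first IPv6 bisection step I'm planning something like the sketch below (note that setting the sysctl to 0 drops all routed IPv6 while the test runs):

```
# Temporarily disable IPv6 forwarding to see whether the ip6_tryforward()
# panic stops; routed v6 traffic will be dropped while this is 0.
sysctl net.inet6.ip6.forwarding        # check the current value
sysctl net.inet6.ip6.forwarding=0      # turn it off for the test window
```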
#6
I do - I backed up the entire folder.

Dump header from device: /dev/gpt/swapfs
  Architecture: amd64
  Architecture Version: 2
  Dump Length: 1956237312
  Blocksize: 512
  Compression: none
  Dumptime: 2023-10-31 10:41:57 +0000
  Hostname: OPNsense.home
  Magic: FreeBSD Kernel Dump
  Version String: FreeBSD 13.2-RELEASE-p3 stable/23.7-n254818-f155405f505 SMP
  Panic String: page fault
  Dump Parity: 2194897932
  Bounds: 0
  Dump Status: good
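
If it helps anyone, this is roughly how the matching core can be opened (a sketch - it assumes the kernel debug symbols are installed at the usual FreeBSD path):

```
# Open the saved core against the kernel it was produced by; the
# kernel.debug path assumes the debug symbols package is installed.
kgdb /usr/lib/debug/boot/kernel/kernel.debug /var/crash/vmcore.0
# then at the (kgdb) prompt: bt
```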
#7
I've popped it on WeTransfer - https://we.tl/t-QYw1eSa4pj - let me know if there are any problems.
#8
I have just had a PPPoE crash (typical), and I do have a 1.96 GB vmcore.0 crash file from the "production kernel" - would that help?
#9
Sorry I've been away for a few days.

I installed the debug kernel last night, but after doing so, OPNsense panics on boot.

I had to get things back up and running, so I used the console port to select the previous kernel - I'll try to capture the panic this evening to see if we can work around it.
#10
Yes, I am using IPv6  :)

Edit: I've uploaded the textdump file to my original post.
#11
Sometimes I need to repeatedly reconnect my PPPoE connection, as my ISP doesn't properly weight their gateways and I end up on one on the other side of the country.

Recently (I think since 23.7), when I do this, OPNsense completely locks up and restarts after a few reconnects. I've submitted a few crash reports, but wanted to check whether anyone else here is able to reproduce this.

I have the crash log, which I can upload if it'd help anyone (and is there anything other than IPs I should remove from the logs?).
#12
I was in the process of digging through a manual SNMP walk :)

Looks like the data is correctly returned from OPNsense - I assumed it was dodgy because it was resetting values I had previously set in LibreNMS.
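
For reference, the walk was along these lines (a sketch - the user, auth settings, and address are placeholders for my real v3 credentials):

```
# SNMPv3 walk of the interface speed columns; credentials and host are
# placeholders.
snmpwalk -v3 -l authPriv -u monitor -a SHA -A '<authpass>' -x AES -X '<privpass>' \
    192.0.2.1 IF-MIB::ifSpeed
snmpwalk -v3 -l authPriv -u monitor -a SHA -A '<authpass>' -x AES -X '<privpass>' \
    192.0.2.1 IF-MIB::ifHighSpeed
```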

It looks like there was a bug in LibreNMS (https://github.com/librenms/librenms/pull/15238), which has now been fixed, so I've updated and it has resolved my issue.

Thanks for the quick reply @Monviech!
#13
I can't remember when this started happening, but the speed of all my ports is being reported as 1 bps via SNMP into LibreNMS.

I have tried manually changing it in LibreNMS - but when the device is re-polled, it resets back to 1 bps.

- OPNsense 23.7.5-amd64
- os-net-snmp version: 1.5_2
- SNMP is configured via V3

Has anyone else experienced this, or does anyone have pointers on what I can do to fix it?
#14
23.1 Legacy Series / Re: Unbound logging/cpu usage
January 31, 2023, 10:11:50 PM
I have also seen quite a steady increase in CPU usage, with htop showing the logger process as the culprit.