20.7 Legacy Series / Command prompt in webgui
« on: October 31, 2020, 11:46:33 am »
If I'm not mistaken, pfSense has a command prompt available in its webGUI. I can't find any plugin for it; is it available in OPNsense?
Quote: "I just checked... This went in on Thu Nov 14 23:31:20 2019 +0000 but was never moved to stable/12. I am not sure if we will be seeing this before FreeBSD 13."

Oops, that would mean waiting almost years, not months... Do you know the reason behind such a move? Most Linux distributions already include it.
Connecting to host 172.16.1.1, port 59242
[ 5] local 172.17.0.2 port 36110 connected to 172.16.1.1 port 59242
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 111 MBytes 932 Mbits/sec 15 704 KBytes
[ 5] 1.00-2.00 sec 109 MBytes 913 Mbits/sec 0 816 KBytes
[ 5] 2.00-3.00 sec 109 MBytes 912 Mbits/sec 0 912 KBytes
[ 5] 3.00-4.00 sec 109 MBytes 912 Mbits/sec 0 1000 KBytes
[ 5] 4.00-5.00 sec 110 MBytes 923 Mbits/sec 0 1.06 MBytes
[ 5] 5.00-6.00 sec 106 MBytes 891 Mbits/sec 235 594 KBytes
[ 5] 6.00-7.00 sec 108 MBytes 902 Mbits/sec 31 554 KBytes
[ 5] 7.00-8.00 sec 109 MBytes 912 Mbits/sec 0 690 KBytes
[ 5] 8.00-9.00 sec 109 MBytes 912 Mbits/sec 0 802 KBytes
[ 5] 9.00-10.00 sec 108 MBytes 902 Mbits/sec 0 899 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.06 GBytes 911 Mbits/sec 281 sender
[ 5] 0.00-10.01 sec 1.06 GBytes 908 Mbits/sec receiver
iperf Done.
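As a sanity check on the summary line above: iperf3 reports Transfer in GBytes (2^30 bytes) and Bitrate in decimal Mbits/sec, so the two figures can be cross-checked against the interval. A minimal sketch (note the 1.06 GBytes value is already rounded by iperf3, so the result is approximate):

```python
# Cross-check the iperf3 sender summary: 1.06 GBytes over 10 s
# should correspond to roughly 911 Mbits/sec.
transfer_gbytes = 1.06          # "Transfer" column, GBytes = 2**30 bytes
interval_s = 10.0               # "Interval" column

bits = transfer_gbytes * 2**30 * 8       # total bits sent
bitrate_mbps = bits / interval_s / 1e6   # decimal megabits per second

print(round(bitrate_mbps))  # -> 911
```

The 281 retransmits in the sender summary are the sum of the per-second Retr column (15 + 235 + 31), which matches the output above.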
Quote: "Can you try clearing the browser cache and autocomplete values, or using another browser? It seems that it tries to save an autocomplete value as well, and that causes the error."

And it did the job!
Quote: "Hi @GreenMatter, you should be able to do this with the Free Edition. Can you send a Problem Report via the User Interface?"

Hi @mb, I'm sending the report right now. What about that netmap error?
kernel 569.404363 [1174] netmap_extra_free breaking with head 69798912

I use vmxnet3 and the native netmap driver. Is it anything to be worried about?
        |vmx1_vlan1     |
vmx1----|vmx1_vlan11    |---port group id4095---vswitch2----Unifi switch port with tagged vlans
        |vmx1_vlanx+1...|                          |
                                                   |
ESXi-------vmk0--------------port group id1--------|

And as Sensei required the parent interface to be monitored, I had created it with network port vmx1 - and this was the reason for all these problems. So I removed that parent interface and edited vlan1 to be just "lan" with network port vmx1 (it effectively became an untagged interface). Because of that I needed to change the settings of the ESXi port group and the Unifi switch port:

vmx1----|vmx1_vlan11    |---port group id4095---vswitch2----Unifi switch port with native id1 and all other vlans tagged
        |vmx1_vlanx+1...|                          |
                                                   |
ESXi-------vmk0--------------port group id0--------|
And once this was done, the ESXi interface stopped outputting a huge amount of data and I could upgrade OPNsense to 20.7.

This kernel also has native netmap support for vmx(4), the VMware VMXNET3 Virtual Interface Controller device.
https://svnweb.freebsd.org/base?view=revision&revision=344272
Native netmap support should yield better performance compared to the emulated driver.
It would be much appreciated if someone with an existing VMware deployment could test and provide feedback.
PS: Please note that you'll need to set the vmxnet3.netmap_native tunable to 1 (from System: Settings: Tunables) to enable native netmap mode.
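For reference, the same setting can also be made from the console; a minimal sketch, assuming shell access and that the knob is applied as a boot-time loader tunable (the GUI path above is the supported way on OPNsense):

```
# /boot/loader.conf.local -- applied at next boot
# (sketch; on OPNsense, prefer System: Settings: Tunables)
vmxnet3.netmap_native=1
```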
Quote: "Yes, disable all offloading ..."

All hardware offloads have been disabled. But there must be some loop over there. As I use VLANs in the firewall, OPNsense is in trunking port group 4095 (VGT), all non-VLAN-aware VMs are tagged by the vswitch (VST), and I communicate with them via tagged ports in the physical switch (EST).
Quote: "Sounds strange, especially on an OpenVPN interface. You know that your OpenVPN packets presumably need to leave your box through your WAN interface with MTU 1500? This will lead to a lot of fragmented traffic, I guess."

Yes, it's strange, but I can't find a better one. The OPNsense VM uses vmxnet3 and I have hardware offload enabled - shall I disable it?
Quote: "If an MTU of 24000 is the most reliable, there is something wrong in general."

In general, I don't have any fancier settings than an outgoing VPN interface and VLAN subnets. I have tried various combinations of mssfix/link-mtu/tun-mtu and fragment as well, and I ended up with:
Quote: "For me it sounds like you have a loop somewhere, and too many packets are also reaching the VM."
mssfix 0;
fragment 0;
tun-mtu 24000;

for UDP connections. All LAN interfaces have MTU 1500.
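For context, these directives live in the OpenVPN instance configuration; a minimal sketch of how they fit together (only the three MTU-related lines come from the post, the rest is illustrative):

```
# OpenVPN config fragment (sketch; only tun-mtu/fragment/mssfix
# are taken from the post, the surrounding lines are illustrative)
proto udp
dev tun

tun-mtu 24000    # oversized tunnel MTU that proved most reliable here
fragment 0       # disable OpenVPN's internal fragmentation
mssfix 0         # do not clamp the TCP MSS
```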
Quote: "Which version of ESXi are you running?"

I used to have 6.7, but recently I upgraded ESXi to 7.01.
Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 02
fault virtual address = 0x0
fault code = supervisor write data, page not present
instruction pointer = 0x20:0xffffffff80e3b142
stack pointer = 0x28:0xfffffe00403f28d0
frame pointer = 0x28:0xfffffe00403f29a0
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 0 (if_io_tqg_1)
trap number = 12
panic: page fault
cpuid = 1
time = 1603450742
__HardenedBSD_version = 1200059 __FreeBSD_version = 1201000
version = FreeBSD 12.1-RELEASE-p10-HBSD #0 6e16e28f1bf(stable/20.7)-dirty: Tue Oct 20 13:30:19 CEST 2020
root@sensey64:/usr/obj/usr/src/amd64.amd64/sys/SMP
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe00403f2580
vpanic() at vpanic+0x1a2/frame 0xfffffe00403f25d0
panic() at panic+0x43/frame 0xfffffe00403f2630
trap_fatal() at trap_fatal+0x39c/frame 0xfffffe00403f2690
trap_pfault() at trap_pfault+0x49/frame 0xfffffe00403f26f0
trap() at trap+0x29f/frame 0xfffffe00403f2800
calltrap() at calltrap+0x8/frame 0xfffffe00403f2800
--- trap 0xc, rip = 0xffffffff80e3b142, rsp = 0xfffffe00403f28d0, rbp = 0xfffffe00403f29a0 ---
iflib_rxeof() at iflib_rxeof+0x542/frame 0xfffffe00403f29a0
_task_fn_rx() at _task_fn_rx+0xc0/frame 0xfffffe00403f29e0
gtaskqueue_run_locked() at gtaskqueue_run_locked+0x144/frame 0xfffffe00403f2a40
gtaskqueue_thread_loop() at gtaskqueue_thread_loop+0x98/frame 0xfffffe00403f2a70
fork_exit() at fork_exit+0x83/frame 0xfffffe00403f2ab0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe00403f2ab0
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
KDB: enter: panic

I think it was caused by:

2020-10-23T12:48:08 kernel KDB: enter: panic
2020-10-23T12:48:08 kernel panic() at panic+0x43/frame 0xfffffe00403f2630
2020-10-23T12:48:08 kernel vpanic() at vpanic+0x1a2/frame 0xfffffe00403f25d0
2020-10-23T12:48:08 kernel panic: page fault
2020-10-23T12:46:58 kernel KDB: enter: panic
2020-10-23T12:46:58 kernel panic() at panic+0x43/frame 0xfffffe00403f2630
2020-10-23T12:46:58 kernel vpanic() at vpanic+0x1a2/frame 0xfffffe00403f25d0
2020-10-23T12:46:58 kernel panic: page fault
2020-10-23T12:43:55 kernel KDB: enter: panic
2020-10-23T12:43:55 kernel panic() at panic+0x43/frame 0xfffffe00403f2630
2020-10-23T12:43:55 kernel vpanic() at vpanic+0x1a2/frame 0xfffffe00403f25d0
2020-10-23T12:43:55 kernel panic: page fault

And I noticed that the ESXi host - 172.16.0.8 (on the screenshot) - has quite high bandwidth output: hundreds of GB for just browsing the webGUI...? It is visible on the 20.1.9 traffic report. And when I refresh the ESXi webGUI, Bandwidth Out hits 3.4 G...

Resource limit matched Service RootFs
Date: Fri, 23 Oct 2020 12:32:00
Action: alert
Description: space usage 100.0% matches resource limit [space usage > 75.0%]

Any ideas? I really don't know where to start.
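The alert above comes from the built-in monit service; a check along these lines would produce it (a sketch; the exact OPNsense-generated syntax and service name may differ):

```
# monit filesystem check (sketch)
check filesystem RootFs with path /
    if space usage > 75% then alert
```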