OPNsense Forum
Topics - namezero111111

1. 22.1 Legacy Series / OpenVPN uses wrong source IP for firewall-originated packets on VPN
« on: June 12, 2022, 08:30:21 pm »
Dear folks,

We have run into a small issue with a new 22.1 installation regarding the LAN interface.
We have the following configuration:

- The LAN interface is on igb0_vlan2
- LAN has an assigned IP address (say 192.168.1.1)
- Filtering is happening only on LAN interface
- Firewall "Shared forwarding" is enabled (disabling it makes no difference)
- OpenVPN client connection is used
- Static route for OpenVPN is added to Routes
 
The problem is that the OPNsense device itself is unable to send any packets via the VPN, including ICMP, because an incorrect source IP (0.0.0.0) is used instead of the LAN or OpenVPN IP.

- Client connections from LAN to OpenVPN work
- Connections from remote OpenVPN network to LAN work
- Connections from remote OpenVPN network to LAN interface IP work
- Connections from local device to OpenVPN connection fail:
    Here, the remote VPN gateway sees a source IP of 0.0.0.0 for the packet, hence the connection fails.
    Specifying the source IP manually works well:
      ping -S 192.168.1.1 <destination>
      
      
Now, this seems to be specific to the bridging configuration, as we have multiple setups (albeit on older OPNsense versions) running well, but they don't have a bridged LAN interface.


What settings are we missing to make this work? Maybe an interface metric somewhere?
We need this for scheduled backups, for example.

Any pointers are greatly appreciated.

I've updated the post, as having a non-bridged interface makes no difference.

I have only noticed that on 18.1 the VPN route has the "G" flag set and a gateway instead of a link, while on 22.1 it doesn't:

Quote (22.1)
192.168.0.0/16     link#11            US       ovpnc1


Quote (18.1)
192.168.0.0/16     192.168.x.x UGS      ovpnc1


Any pointers would really help, thanks!
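A dry-run sketch of the repair we are considering, based on the two route entries above (REMOTE_GW is a placeholder, not our real ovpnc1 peer; the command is printed rather than executed):

```shell
#!/bin/sh
# Rebuild the 18.1-style route (gateway + UGS flags) for the VPN network.
# REMOTE_GW is hypothetical; take the real peer address from "netstat -rn".
VPN_NET="192.168.0.0/16"
REMOTE_GW="192.168.200.1"
FIX_CMD="route change -net ${VPN_NET} ${REMOTE_GW}"
echo "${FIX_CMD}"   # dry run: print the command, run it by hand on the console
```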
   

2. 19.1 Legacy Series / Kernel Panic on boot after clean reboot
« on: June 19, 2019, 12:54:05 pm »
Dear folks,

We have an embedded device running the 19.1 i386 nano version.
An admin rebooted the device via WebGUI and it never came back online.

Connecting via serial console, the output shows a failure to load netgraph.ko due to an undefined symbol.

We fixed this by reflashing and restoring, but it is very disconcerting to just write this off as a strange occurrence.

We have excluded:
- CF card; it checks out OK
- Unclean shutdown; it was rebooted via the WebGUI, and fstab is mounted with noatime,sync


We'd appreciate any input as to what might cause this, so we can prevent it in the future.

Here's the output:

Quote
/boot/kernel/kernel text=0x1409269 data=0xee088+0x28e534 syms=[0x4+0xf6f50+0x4+0x18b906]
/boot/entropy size=0x1000
/boot/kernel/if_gre.ko text=0x3118 data=0x278+0x30 syms=[0x4+0xa30+0x4+0xab9]
/boot/kernel/if_tap.ko text=0x3734 data=0x2dc+0x34 syms=[0x4+0xa70+0x4+0x9f6]
/boot/kernel/pf.ko text=0x35ca8 data=0x4e8+0x1128 syms=[0x4+0x2490+0x4+0x2a0c]
/boot/kernel/carp.ko text=0x8150 data=0x374+0x74 syms=[0x4+0xe90+0x4+0xf1c]
/boot/kernel/if_bridge.ko text=0x7c40 data=0x350+0x3c syms=[0x4+0xff0+0x4+0x11f2]
loading required module 'bridgestp'
/boot/kernel/bridgestp.ko text=0x4994 data=0xe0+0x18 syms=[0x4+0x6d0+0x4+0x66b]
/boot/kernel/if_lagg.ko text=0xa024 data=0x294+0x28 syms=[0x4+0xf20+0x4+0x108a]
/boot/kernel/ng_UI.ko text=0x908 data=0x128 syms=[0x4+0x3a0+0x4+0x352]
loading required module 'netgraph'
/boot/kernel/netgraph.ko text=0xb348 data=0x474+0x8c syms=[0x4+0x14c0+0x4+0x1984]
/boot/kernel/ng_async.ko text=0x1b10 data=0x158 syms=[0x4+0x5b0+0x4+0x5e4]
/boot/kernel/ng_bpf.ko text=0x2370 data=0x158 syms=[0x4+0x5f0+0x4+0x66d]
/boot/kernel/ng_bridge.ko text=0x2604 data=0x158+0x20 syms=[0x4+0x6b0+0x4+0x780]
/boot/kernel/ng_cisco.ko text=0x1814 data=0x128 syms=[0x4+0x540+0x4+0x508]
/boot/kernel/ng_echo.ko text=0x4ec data=0x128 syms=[0x4+0x2f0+0x4+0x2f7]
/boot/kernel/ng_eiface.ko text=0x19f0 data=0x148+0x4 syms=[0x4+0x6e0+0x4+0x707]
/boot/kernel/ng_ether.ko text=0x2398 data=0x14c+0x4 syms=[0x4+0x760+0x4+0x7c9]
/boot/kernel/ng_frame_relay.ko text=0xe90 data=0x128 syms=[0x4+0x3e0+0x4+0x3be]
/boot/kernel/ng_hole.ko text=0x934 data=0x128 syms=[0x4+0x3c0+0x4+0x3b4]
/boot/kernel/ng_iface.ko text=0x1e04 data=0x178+0x4 syms=[0x4+0x6f0+0x4+0x746]
/boot/kernel/ng_ksocket.ko text=0x3208 data=0x158 syms=[0x4+0x850+0x4+0x94f]
/boot/kernel/ng_l2tp.ko text=0x3e64 data=0x158 syms=[0x4+0x720+0x4+0x7ce]
/boot/kernel/ng_l2tp.ko text=0x3e64 data=0x158 syms=[0x4+0x720+0x4+0x7ce]
can't load file '/boot/kernel/ng_l2tp.ko': input/output error
/boot/kernel/ng_lmi.ko text=0x24e0 data=0x128 syms=[0x4+0x4b0+0x4+0x43a]
/boot/kernel/ng_mppc.ko text=0x3ab0 data=0x25c+0x4 syms=[0x4+0x760+0x4+0x89d]
loading required module 'rc4'
/boot/kernel/rc4.ko text=0x3d0 data=0xe0 syms=[0x4+0x250+0x4+0x224]
/boot/kernel/ng_one2many.ko text=0x1420 data=0x128 syms=[0x4+0x500+0x4+0x592]
/boot/kernel/ng_ppp.ko text=0x601c data=0x158 syms=[0x4+0x8c0+0x4+0x974]
/boot/kernel/ng_pppoe.ko text=0x534c data=0x15c syms=[0x4+0x740+0x4+0x790]
/boot/kernel/ng_pptpgre.ko text=0x3068 data=0x128 syms=[0x4+0x5c0+0x4+0x5f8]
/boot/kernel/ng_rfc1490.ko text=0x12e8 data=0x128 syms=[0x4+0x440+0x4+0x41e]
/boot/kernel/ng_socket.ko text=0x2830 data=0x4a8+0x18 syms=[0x4+0x9e0+0x4+0xb4b]
/boot/kernel/ng_tee.ko text=0xe7c data=0x128 syms=[0x4+0x440+0x4+0x42b]
/boot/kernel/ng_tty.ko text=0x1724 data=0x148 syms=[0x4+0x570+0x4+0x4d4]
/boot/kernel/ng_vjc.ko text=0x2430 data=0x128 syms=[0x4+0x5c0+0x4+0x5d6]
/boot/kernel/ng_vlan.ko text=0x16d0 data=0x128 syms=[0x4+0x4f0+0x4+0x50e]
/boot/kernel/if_enc.ko text=0x1118 data=0x2b8+0x8 syms=[0x4+0x690+0x4+0x813]
/boot/kernel/pflog.ko text=0x10f0 data=0x11c+0x44 syms=[0x4+0x540+0x4+0x55b]
/boot/kernel/pfsync.ko text=0x7e5c data=0x228+0x160 syms=[0x4+0xd40+0x4+0xd84]
/boot/kernel/ng_car.ko text=0x1c94 data=0x1a0 syms=[0x4+0x540+0x4+0x543]
/boot/kernel/ng_deflate.ko text=0x1b34 data=0x174 syms=[0x4+0x600+0x4+0x6c0]
/boot/kernel/ng_pipe.ko text=0x2b0c data=0x158+0x1c syms=[0x4+0x6b0+0x4+0x6c1]
/boot/kernel/ng_pred1.ko text=0x1ac4 data=0x158 syms=[0x4+0x530+0x4+0x594]
/boot/kernel/ng_tcpmss.ko text=0xe74 data=0x128 syms=[0x4+0x420+0x4+0x465]
Booting...
KDB: debugger backends: ddb
KDB: current backend: ddb
Copyright (c) 1992-2017 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 11.1-RELEASE-p6  6621d681e(stable/18.1) i386
FreeBSD clang version 4.0.0 (tags/RELEASE_400/final 297347) (based on LLVM 4.0.0)
VT(vga): resolution 640x480
[HBSD HARDENING] procfs hardening: enabled
[HBSD ASLR] status: opt-out
[HBSD ASLR] mmap: 14 bit
[HBSD ASLR] exec base: 14 bit
[HBSD ASLR] stack: 14 bit
[HBSD ASLR] vdso: 8 bit
[HBSD LOG] logging to system: enabled
[HBSD LOG] logging to user: disabled
[HBSD SEGVGUARD] status: opt-out
[HBSD SEGVGUARD] expiry: 120 sec
[HBSD SEGVGUARD] suspension: 600 sec
[HBSD SEGVGUARD] maxcrashes: 5
link_elf: symbol ▒▒▒▒▒▒▒▒▒▒▒▒U▒▒SWV▒▒]▒▒x▒E▒M▒PQS▒u
▒
 undefined
KLD file netgraph.ko - could not finalize loading

KLD file ng_UI.ko - cannot find dependency "netgraph"
KLD file ng_async.ko - cannot find dependency "netgraph"
KLD file ng_bpf.ko - cannot find dependency "netgraph"
KLD file ng_bridge.ko - cannot find dependency "netgraph"
KLD file ng_cisco.ko - cannot find dependency "netgraph"
KLD file ng_echo.ko - cannot find dependency "netgraph"
KLD file ng_eiface.ko - cannot find dependency "netgraph"
KLD file ng_ether.ko - cannot find dependency "netgraph"
KLD file ng_frame_relay.ko - cannot find dependency "netgraph"
KLD file ng_hole.ko - cannot find dependency "netgraph"
KLD file ng_iface.ko - cannot find dependency "netgraph"
KLD file ng_ksocket.ko - cannot find dependency "netgraph"
KLD file ng_lmi.ko - cannot find dependency "netgraph"
KLD file ng_mppc.ko - cannot find dependency "netgraph"
KLD file ng_one2many.ko - cannot find dependency "netgraph"
KLD file ng_ppp.ko - cannot find dependency "netgraph"
KLD file ng_pppoe.ko - cannot find dependency "netgraph"
KLD file ng_pptpgre.ko - cannot find dependency "netgraph"
KLD file ng_rfc1490.ko - cannot find dependency "netgraph"
KLD file ng_socket.ko - cannot find dependency "netgraph"
KLD file ng_tee.ko - cannot find dependency "netgraph"
KLD file ng_tty.ko - cannot find dependency "netgraph"
KLD file ng_vjc.ko - cannot find dependency "netgraph"
KLD file ng_vlan.ko - cannot find dependency "netgraph"
KLD file ng_car.ko - cannot find dependency "netgraph"
KLD file ng_deflate.ko - cannot find dependency "netgraph"
KLD file ng_pipe.ko - cannot find dependency "netgraph"
KLD file ng_pred1.ko - cannot find dependency "netgraph"
KLD file ng_tcpmss.ko - cannot find dependency "netgraph"
CPU: Geode(TM) Integrated Processor by AMD PCS (498.06-MHz 586-class CPU)
  Origin="AuthenticAMD"  Id=0x5a2  Family=0x5  Model=0xa  Stepping=2
  Features=0x88a93d<FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CLFLUSH,MMX>
  AMD Features=0xc0400000<MMX+,3DNow!+,3DNow!>
real memory  = 268435456 (256 MB)
avail memory = 230801408 (220 MB)
pnpbios: Bad PnP BIOS data checksum
random: unblocking device.
Timecounter "TSC" frequency 498061502 Hz quality 800
taskqgroup_adjust failed cnt: 1 stride: 1 mp_ncpus: 1 smp_started: 0
taskqgroup_adjust failed cnt: 1 stride: 1 mp_ncpus: 1 smp_started: 0
random: entropy device external interface
wlan: mac acl policy registered
kbd0 at kbdmux0
panic: vm_fault: fault on nofault entry, addr: d18a8000
cpuid = 0
KDB: stack backtrace:
db_trace_self_wrapper(c2022a7c,56be1d6c,c1ad48f8,c1630899,c2022a44,...) at db_trace_self_wrapper+0x2a/frame 0xc20229a0
kdb_backtrace(0,0,0,d18a8000,d18a8000,...) at kdb_backtrace+0x2e/frame 0xc2022a00
vpanic(c1630899,c2022a44,c2022a44,c2022af8,c0f90355,...) at vpanic+0x10e/frame 0xc2022a24
panic(c1630899,d18a8000,c189c370,7af4800,1,...) at panic+0x14/frame 0xc2022a38
vm_fault_hold(c23e1000,d18a8000,1,0,0) at vm_fault_hold+0x1f55/frame 0xc2022af8
vm_fault(c23e1000,d18a8000,1,0) at vm_fault+0x69/frame 0xc2022b20
trap_pfault(d18a883c) at trap_pfault+0xcc/frame 0xc2022b64
trap(c2022c68) at trap+0x2b3/frame 0xc2022c5c
calltrap() at calltrap+0x6/frame 0xc2022c5c
--- trap 0xc, eip = 0xc0cff500, esp = 0xc2022ca8, ebp = 0xc2022cb4 ---
kobj_class_compile(c18360ac) at kobj_class_compile+0xc0/frame 0xc2022cb4
devclass_add_driver(c3de0b80,c18360ac,7fffffff,c1911b7c,c0cc9179,c1911b28,c1911b10) at devclass_add_driver+0x30/frame 0xc2022ccc
driver_module_handler(c3dab1c0,0,c1836094) at driver_module_handler+0x62/frame 0xc2022cfc
module_register_init(c1836088) at module_register_init+0xa0/frame 0xc2022d1c
mi_startup() at mi_startup+0x78/frame 0xc2022d38
begin() at begin+0x22
KDB: enter: panic
[ thread pid 0 tid 100000 ]
Stopped at      kdb_enter+0x35: movl    $0,kdb_why
db>

3. 19.1 Legacy Series / MultiWAN failback state flushing / VoIP failover
« on: May 24, 2019, 12:54:38 pm »
Dear folks,

We are trying to set up an outgoing failover (Tier 1/2) gateway group for a registered VoIP line.
So far this works well.

However, when the registration occurs via tier 2, and tier 1 comes back online, the registration stays on tier 2.

This results in the RTP data going out over tier 1 while the registration stays on tier 2, leaving the call in a split state and breaking the system.

So I am wondering:
1. Is there any way to kill states on tier 2 once tier 1 comes back online?
2. Is there a better way to solve this?

Thank you in advance!

EDIT: Something like a "Disable State Killing on Gateway Reconnect" option, or the ability to run a custom script when a gateway comes online?
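To make question 1 concrete, here is a sketch of the hook we have in mind (GW1_IP/GW2_IP are placeholders; pfctl -k, per pfctl(8), kills matching states, and the command is only printed here, not executed):

```shell
#!/bin/sh
# Failback hook sketch: once the tier-1 gateway answers again, kill the states
# pinned to the tier-2 gateway so the VoIP registration can move back.
GW1_IP="198.51.100.1"   # tier-1 gateway (placeholder)
GW2_IP="203.0.113.1"    # tier-2 gateway (placeholder)

tier1_up() {
    ping -c 1 "$1" > /dev/null 2>&1   # reachability probe
}

kill_states_cmd() {
    # Kill states from any source to the tier-2 gateway; "pfctl -F states"
    # would be the heavy-handed alternative (flush everything).
    echo "pfctl -k 0.0.0.0/0 -k $1"
}

# Intended cron usage (commented out; this sketch only builds the command):
#   tier1_up "$GW1_IP" && eval "$(kill_states_cmd "$GW2_IP")"
```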

4. 19.1 Legacy Series / [RESOLVED] OPNsense behind cable modem fails DHCP renewal
« on: March 06, 2019, 07:43:21 pm »
I see this has been discussed almost ad nauseam, but I could not find a satisfactory solution.

We have a few remote sites behind OPNsense with cable modems in front of them. If there is a cable outage, OPNsense fails to renew the IP address and becomes unreachable.
Currently, our solution is to call/write the ISP and ask for a remote modem reset. This cycles the link and nudges OPNsense back online. However, it is not a very satisfying solution.

We have tried the following:

  • Cron job such as
Code: [Select]
/sbin/dhclient vr1
  • System->Settings->Cron => Periodic interface reset
       Will this actually do anything if there already is an IP? It seems to do nothing.

We have also read about gateway monitoring, but this seems moot if no IP is available.
The problem is also that these are remote sites, so we'd like a solution known to work before we get frisky configuring away at faraway sites :}

Is there any better way to fix this?
Thanks in advance...

Code: [Select]
Mar  6 05:19:18 OPNSense_host kernel: igb1: link state changed to DOWN
Mar  6 05:19:22 OPNSense_host kernel: igb1: link state changed to UP
Mar  6 05:20:17 OPNSense_host configd_ctl.py: error in configd communication  Traceback (most recent call last):   File "/usr/local/opnsense/service/configd_ctl.py", line 65, in exec_config_cmd     line = sock.recv(65536) timeout: timed out
Mar  6 05:20:17 OPNSense_host configd.py: [29714829-b357-40fc-8649-acb929050936] Linkup stopping igb1
Mar  6 05:20:17 OPNSense_host opnsense: /usr/local/etc/rc.linkup: DEVD Ethernet detached event for wan
Mar  6 05:20:17 OPNSense_host opnsense: /usr/local/etc/rc.linkup: The command '/sbin/dhclient -c /var/etc/dhclient_wan.conf igb1 > /tmp/igb1_output 2> /tmp/igb1_error_output' returned exit code '15', the output was ''
Mar  6 05:20:18 OPNSense_host configd.py: [96e26ea3-ff70-4063-a08b-d7aceb4779a9] Linkup starting igb1
Mar  6 05:20:18 OPNSense_host opnsense: /usr/local/etc/rc.linkup: DEVD Ethernet attached event for wan
Mar  6 05:20:18 OPNSense_host opnsense: /usr/local/etc/rc.linkup: HOTPLUG: Configuring interface wan
Mar  6 05:20:38 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: IP renewal is starting on 'igb1'
Mar  6 05:20:38 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: On (IP address: 68.200.7.180) (interface: WAN[wan]) (real interface: igb1).
Mar  6 05:20:40 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: ROUTING: setting IPv4 default route to 68.xxx.xxx.1
Mar  6 05:20:40 OPNSense_host configd.py: [9d6d6ae0-15f1-4bc9-a402-64450a0fea5b] updating dyndns WAN_DHCP
Mar  6 05:20:41 OPNSense_host configd.py: [4c62eb24-69f1-4776-9631-bca3e9cbcab8] Restarting OpenVPN tunnels/interfaces WAN_DHCP
Mar  6 05:20:41 OPNSense_host opnsense: /usr/local/etc/rc.openvpn: OpenVPN: One or more OpenVPN tunnel endpoints may have changed its IP. Reloading endpoints that may use WAN_DHCP.
Mar  6 05:20:41 OPNSense_host configd.py: [9717900f-6043-443b-9cb6-69abc20d10c8] Reloading filter
Mar  6 05:20:42 OPNSense_host configd.py: [9390544f-594d-404d-acf3-54548762a8bd] updating dyndns wan
Mar  6 05:20:42 OPNSense_host configd.py: unable to sendback response [OK ] for [interface][linkup][['start', 'igb1']] {eee6e08b-dfe0-4e72-8249-cdfc3e42dd2e}, message was Traceback (most recent call last):   File "/usr/local/opnsense/service/modules/processhandler.py", line 202, in run     self.connection.sendall('%s\n' % result)   File "/usr/local/lib/python2.7/socket.py", line 228, in meth     return getattr(self._sock,name)(*args) error: [Errno 32] Broken pipe
Mar  6 05:20:43 OPNSense_host configd.py: [25b8c1a1-f6ba-4959-a8ca-ef1aae506d05] generate template OPNsense/Filter
Mar  6 05:20:45 OPNSense_host configd.py: generate template container OPNsense/Filter
Mar  6 05:20:45 OPNSense_host configd.py: [09d48ade-ff54-443b-8326-36f47374ad0d] refresh url table aliases
Mar  6 05:20:46 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: Resyncing OpenVPN instances for interface WAN.
Mar  6 05:20:47 OPNSense_host configd.py: [200458c6-8665-4443-a624-6ede43036172] generate template OPNsense/Filter
Mar  6 05:20:49 OPNSense_host configd.py: generate template container OPNsense/Filter
Mar  6 05:20:50 OPNSense_host configd.py: [7b1166eb-c283-40ec-914a-13a58fe45da7] refresh url table aliases
Mar  6 05:20:50 OPNSense_host opnsense: /usr/local/etc/rc.linkup: ROUTING: setting IPv4 default route to 68.xxx.xxx.1
Mar  6 05:20:56 OPNSense_host configd.py: [379827d4-6f69-4c3a-80af-d4468d94eca6] updating dyndns wan
Mar  6 05:20:56 OPNSense_host configd.py: [cf83f0e6-efb9-4afa-a90a-7a14a2599016] Linkup stopping igb1
Mar  6 05:20:57 OPNSense_host opnsense: /usr/local/etc/rc.linkup: DEVD Ethernet detached event for wan
Mar  6 05:20:57 OPNSense_host opnsense: /usr/local/etc/rc.linkup: Clearing states to old gateway 68.xxx.xxx.1.
Mar  6 05:20:57 OPNSense_host configd.py: [9102f59c-4d37-4f80-a4a5-8ec6bc48401e] Linkup starting igb1
Mar  6 05:20:57 OPNSense_host opnsense: /usr/local/etc/rc.linkup: DEVD Ethernet attached event for wan
Mar  6 05:20:57 OPNSense_host opnsense: /usr/local/etc/rc.linkup: HOTPLUG: Configuring interface wan
Mar  6 05:20:58 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: IP renewal is starting on 'igb1'
Mar  6 05:20:58 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: On (IP address: 68.200.7.180) (interface: WAN[wan]) (real interface: igb1).
Mar  6 05:21:00 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: ROUTING: setting IPv4 default route to 68.xxx.xxx.1
Mar  6 05:21:00 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: Resyncing OpenVPN instances for interface WAN.
Mar  6 05:21:01 OPNSense_host configd.py: [4c6b0b2f-24a2-4df8-8363-c1d212c46552] generate template OPNsense/Filter
Mar  6 05:21:03 OPNSense_host configd.py: generate template container OPNsense/Filter
Mar  6 05:21:04 OPNSense_host configd.py: [13e0811e-1138-41ba-ad1a-003e217e568c] refresh url table aliases
Mar  6 05:21:04 OPNSense_host opnsense: /usr/local/etc/rc.linkup: ROUTING: setting IPv4 default route to 68.xxx.xxx.1
Mar  6 05:21:04 OPNSense_host kernel: igb1: link state changed to DOWN
Mar  6 05:21:13 OPNSense_host kernel: igb1: link state changed to UP
Mar  6 05:21:13 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: IP renewal is starting on 'igb1'
Mar  6 05:21:14 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: On (IP address: 68.200.7.180) (interface: WAN[wan]) (real interface: igb1).
Mar  6 05:21:15 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: ROUTING: setting IPv4 default route to 68.xxx.xxx.1
Mar  6 05:21:15 OPNSense_host configd.py: [d7b02eb3-ae54-43ae-acb9-5796be605e19] updating dyndns WAN_DHCP
Mar  6 05:21:15 OPNSense_host configd.py: [f11a16c5-ef55-4c2c-b3c9-a51ca87b4f13] updating dyndns wan
Mar  6 05:21:16 OPNSense_host configd.py: [174a3144-61ff-41f2-8569-37bf793f7bec] Restarting OpenVPN tunnels/interfaces WAN_DHCP
Mar  6 05:21:16 OPNSense_host opnsense: /usr/local/etc/rc.openvpn: OpenVPN: One or more OpenVPN tunnel endpoints may have changed its IP. Reloading endpoints that may use WAN_DHCP.
Mar  6 05:21:16 OPNSense_host configd.py: [3963ef40-3dc3-4d1a-9a0c-7af23c86764b] Reloading filter
Mar  6 05:21:16 OPNSense_host configd.py: [a4d0c284-3ae3-4dc6-aafd-fb6f4e0e99ee] Linkup stopping igb1
Mar  6 05:21:17 OPNSense_host opnsense: /usr/local/etc/rc.linkup: DEVD Ethernet detached event for wan
Mar  6 05:21:17 OPNSense_host opnsense: /usr/local/etc/rc.linkup: Clearing states to old gateway 68.xxx.xxx.1.
Mar  6 05:21:17 OPNSense_host configd.py: [1862f0c0-7438-4181-8d4c-20c779d11096] Linkup starting igb1
Mar  6 05:21:18 OPNSense_host opnsense: /usr/local/etc/rc.linkup: DEVD Ethernet attached event for wan
Mar  6 05:21:18 OPNSense_host opnsense: /usr/local/etc/rc.linkup: HOTPLUG: Configuring interface wan
Mar  6 05:21:18 OPNSense_host opnsense: /usr/local/etc/rc.linkup: The command '/sbin/dhclient -c /var/etc/dhclient_wan.conf igb1 > /tmp/igb1_output 2> /tmp/igb1_error_output' returned exit code '1', the output was ''
Mar  6 05:21:18 OPNSense_host opnsense: /usr/local/etc/rc.filter_configure: New alert found: There were error(s) loading the rules: /tmp/rules.debug:40: no translation address with matching address family found. - The line in question reads [40]: nat on igb1 inet from 192.168.0.0/16 to any -> igb1 port 1024:65535
Mar  6 05:21:18 OPNSense_host configd.py: [08c18255-1933-49d9-a1b3-00a4f47b8d31] updating dyndns WAN_DHCP
Mar  6 05:21:19 OPNSense_host configd.py: [c7d391b1-9398-478b-8729-b003c37ec261] Restarting OpenVPN tunnels/interfaces WAN_DHCP
Mar  6 05:21:19 OPNSense_host opnsense: /usr/local/etc/rc.openvpn: OpenVPN: One or more OpenVPN tunnel endpoints may have changed its IP. Reloading endpoints that may use WAN_DHCP.
Mar  6 05:21:19 OPNSense_host configd.py: [8740017c-a677-49dc-823f-2017f3557e7d] Reloading filter
Mar  6 05:21:21 OPNSense_host configd.py: [70aedf01-8786-4d41-b047-eb7d1762a864] generate template OPNsense/Filter
Mar  6 05:21:23 OPNSense_host configd.py: generate template container OPNsense/Filter
Mar  6 05:21:24 OPNSense_host configd.py: [d662204e-29cf-49b4-b969-f2edaa62f3ff] updating dyndns wan
Mar  6 05:21:24 OPNSense_host configd.py: [4d1fd158-5a4e-4b45-b16a-5c38cddf0c56] refresh url table aliases
Mar  6 05:21:24 OPNSense_host opnsense: /usr/local/etc/rc.newwanip: Resyncing OpenVPN instances for interface WAN.
Mar  6 05:21:26 OPNSense_host configd.py: [a94f1b68-f244-4dae-8586-a6f6fe61db02] generate template OPNsense/Filter
Mar  6 05:21:28 OPNSense_host configd.py: generate template container OPNsense/Filter
Mar  6 05:21:28 OPNSense_host configd.py: [2a58af7b-f4a0-4040-b859-019fdd55e47f] refresh url table aliases
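In the meantime, the cron idea above could be shaped into a small watchdog (a sketch; WAN_IF/GW_IP are placeholders, and the dhclient call is only built as a string here, not executed):

```shell
#!/bin/sh
# Watchdog sketch: if the upstream gateway stops answering, re-run dhclient on
# the WAN NIC, which is what the remote modem reset effectively triggers today.
WAN_IF="igb1"
GW_IP="68.200.7.1"   # placeholder upstream gateway

renew_cmd() {
    echo "/sbin/dhclient $1"   # build the renewal command for interface $1
}

# Intended cron line (commented out; drop the indirection to go live):
#   ping -c 3 "$GW_IP" > /dev/null 2>&1 || /sbin/dhclient "$WAN_IF"
```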

5. 18.1 Legacy Series / [RESOLVED] Port Forward not working - reply with wrong source port?
« on: June 12, 2018, 06:07:20 pm »
Dear folks,

I am not spotting the error right away. I am trying to NAT and port forward with the rule as attached.
WAN is 192.168.254.2/24 (NAT If)
LAN is 172.16.16.0/24 (Test If)

While the incoming request is seen, it seems like the outgoing reply is NATed separately with the wrong source port:

Code: [Select]
16:01:11.446958 IP 109.41.1.5.14631 > 192.168.254.2.8080: Flags [S], seq 568671719, win 14600, options [mss 1460,sackOK,TS val 460198899 ecr 0,nop,wscale 9], length 0
16:01:11.447756 IP 192.168.254.2.38922 > 109.41.1.5.14631: Flags [S.], seq 415419811, ack 568671720, win 14480, options [mss 1460,sackOK,TS val 190102564 ecr 460198899,nop,wscale 7], length 0
16:01:12.446936 IP 109.41.1.5.14631 > 192.168.254.2.8080: Flags [S], seq 568671719, win 14600, options [mss 1460,sackOK,TS val 460199899 ecr 0,nop,wscale 9], length 0
16:01:12.447656 IP 192.168.254.2.38922 > 109.41.1.5.14631: Flags [S.], seq 415419811, ack 568671720, win 14480, options [mss 1460,sackOK,TS val 190103563 ecr 460198899,nop,wscale 7], length 0
16:01:12.447755 IP 192.168.254.2.38922 > 109.41.1.5.14631: Flags [S.], seq 415419811, ack 568671720, win 14480, options [mss 1460,sackOK,TS val 190103564 ecr 460198899,nop,wscale 7], length 0
16:01:14.447865 IP 192.168.254.2.38922 > 109.41.1.5.14631: Flags [S.], seq 415419811, ack 568671720, win 14480, options [mss 1460,sackOK,TS val 190105564 ecr 460198899,nop,wscale 7], length 0

Hence, the connection never establishes.

Any idea how this could be misconfigured?
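For anyone comparing notes: pulling the reply's source port out of the capture above shows the mismatch directly (on the firewall itself, the state table and generated NAT rules can then be inspected with "pfctl -s state" and "pfctl -s nat"):

```shell
#!/bin/sh
# Extract the source port of the SYN/ACK reply from the tcpdump line above.
# Expected: 8080 (the forwarded port); the capture instead shows 38922.
REPLY_LINE="16:01:11.447756 IP 192.168.254.2.38922 > 109.41.1.5.14631: Flags [S.]"
SRC_PORT=$(echo "$REPLY_LINE" | sed -n 's/^[^ ]* IP [0-9.]*\.\([0-9]*\) >.*/\1/p')
echo "reply source port: ${SRC_PORT}"
```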

6. 18.1 Legacy Series / Traffic Shaper: Default Queue [RESOLVED]
« on: April 21, 2018, 08:12:59 am »
Dear folks,

Regarding an earlier issue with traffic shaping (https://forum.opnsense.org/index.php?topic=7836): what is the proper way to set up a default rule that directs traffic not matched by previous rules into a queue?

The rules as attached result in the ipfw configuration below:

Code: [Select]
60001     8797     1208768 queue 10017 udp from any to 192.168.4.2 dst-port 6010-6016 via em3
60002    13956    16872512 queue 10009 udp from 192.168.4.2 6010-6016 to any via em3
[...]
60019    13922    16826624 queue 10007 ip from 192.168.0.0/16 to any via em3
60020        4         320 queue 10012 ip from any to 192.168.0.0/16 via em3

Traffic matching sequence 1011 is matched by both rules 60002 and 60019 (hence pushed through the pipe twice by different queues).

The expected outcome would be that the traffic matches 60002 and is then left alone by 60019.
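One knob that may be relevant here, from stock FreeBSD ipfw/dummynet (whether OPNsense's shared forwarding honors it is an assumption to verify): the sysctl net.inet.ip.fw.one_pass controls whether a packet leaves the ruleset after its first pipe/queue match, which would let 60002 consume the traffic before 60019 sees it.

```shell
#!/bin/sh
# Sketch: inspect/set first-match exit for dummynet (standard FreeBSD sysctl).
SYSCTL_NAME="net.inet.ip.fw.one_pass"
# On the firewall (not executed here):
#   sysctl ${SYSCTL_NAME}     # 1 = packet exits ipfw after first queue/pipe match
#   sysctl ${SYSCTL_NAME}=1
echo "check ${SYSCTL_NAME}"
```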

7. 18.1 Legacy Series / MultiWAN and FTP
« on: April 19, 2018, 05:12:00 pm »
Dear folks,

given a MultiWAN setup with Gateway group load balancing, there appears to be an issue with outgoing FTP where:

1. The control connection (21) is opened over GW1
2. The data connection is opened over GW2

In that case, listing the directory in passive mode fails.

Is the os-ftp-proxy plugin able to handle this issue, or is there a more elegant way short of configuring static data ports for FTP and forcing them into a failover group?

Thanks in advance!


8. 18.1 Legacy Series / WFQ Traffic Shaper: High packet loss in higher-priority queue
« on: April 07, 2018, 09:01:58 am »
Dear folks, suppose we have configured the traffic shaper in the following way:

Code: [Select]
Limiters:
10000:  6.000 Mbit/s    0 ms burst 0
q141072  50 sl. 0 flows (1 buckets) sched 75536 weight 0 lmax 0 pri 0 droptail
 sched 75536 type FIFO flags 0x0 0 buckets 0 active

Queues:

q10003  50 sl. 0 flows (1 buckets) sched 10000 weight 50 lmax 1500 pri 0 droptail
  0 ip           0.0.0.0/0             0.0.0.0/0        1      549  0    0   0
q10000  50 sl. 1 flows (1 buckets) sched 10000 weight 10 lmax 1500 pri 0 droptail
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0     5356  6023026 42 47485 208

Whenever traffic through q10000 is saturated (here: on purpose), there is significant packet loss on q10003 (the catch-all for all traffic not assigned to q10000).
Here, a simple ICMP ping going through q10003 experiences heavy packet loss (15%).

My understanding is that regardless of the traffic in q10000, 5/6 of the bandwidth (= 5 Mbit/s) should be available to q10003 (plenty for a ping), whereas q10000 should receive 1/6 (= 1 Mbit/s).
The packets are actually dropped, not delayed: adding a 3000 ms wait to the ping also makes it time out.

We have also tried activating CoDel, with no noticeable effect.

What's the best way to ensure q10003 always gets its guaranteed share of the bandwidth?
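For reference, the expected WFQ split on the 6 Mbit/s pipe, computed from the weights shown above (weight 50 for q10003, weight 10 for q10000):

```shell
#!/bin/sh
# Expected per-queue guarantee under saturation: weight / sum-of-weights of the pipe.
PIPE_KBIT=6000
W_Q10003=50
W_Q10000=10
SHARE_Q10003=$(( PIPE_KBIT * W_Q10003 / (W_Q10003 + W_Q10000) ))   # 5000 kbit/s
SHARE_Q10000=$(( PIPE_KBIT * W_Q10000 / (W_Q10003 + W_Q10000) ))   # 1000 kbit/s
echo "q10003: ${SHARE_Q10003} kbit/s, q10000: ${SHARE_Q10000} kbit/s"
```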



9. 18.1 Legacy Series / Gateway groups and NAT: Incorrect NAT IP used on interface
« on: April 03, 2018, 01:01:23 pm »
Dear folks, we are debugging a strange occurrence in 18.1.6 with load balancing and outbound NAT.

When the machine is freshly booted, the wrong NAT IP is used.

Example:
DMZ1 192.168.4.131 with CARP 192.168.4.135
DMZ2 192.168.4.141 with CARP 192.168.4.150

DMZ1 and DMZ2 gateways are in a gateway group.
NAT 192.168.0.0/16 on DMZ1 via 192.168.4.135
NAT 192.168.0.0/16 on DMZ2 via 192.168.4.150

Pings time out, websites won't load, etc.
A tcpdump capture on DMZ2 shows packets originating from 192.168.4.135 (the DMZ1 CARP IP).

When the NAT rules are reversed in priority, a tcpdump capture on DMZ1 shows packets originating from 192.168.4.150 (the DMZ2 CARP IP).

When "Sticky outbound NAT" is disabled, the problem disappears. However, isn't this setting required for e.g. banking websites?
Is there another, better way to load-balance by source, short of multiple firewall rules?

The problem persists when translating to the interface IP rather than the CARP IP.

Has anyone experienced anything like that?
I couldn't find anything related to this here or on GitHub, unfortunately.


Any suggestions?



10. 18.1 Legacy Series / Run custom script on bootup (add static ARP) [resolved]
« on: March 19, 2018, 01:20:48 pm »
Hi folks,

We are trying to run a custom script on bootup. I found a thread about this from 2015 (https://forum.opnsense.org/index.php?topic=274.0); is there a better (more friendly) way by now?

We are trying to make a permanent ARP entry that survives a reboot:

Quote
arp -S 172.16.16.9 11:54:33:A8:B2:6B

I found a way via creating /usr/local/etc/rc.syshook.d/95-staticarp.start (chmod 755) and adding the desired command.
However, this does not survive a link cycle...

Maybe there is a nicer way to do this in the first place that we are simply not aware of?

EDIT
It seems like adding a static ARP entry via the DHCP server settings works; is this OK even if the DHCP server is disabled?
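For reference, the hook file described above, reconstructed from this post (path and mode as stated; the IP/MAC are the ones quoted):

```shell
#!/bin/sh
# /usr/local/etc/rc.syshook.d/95-staticarp.start  (chmod 755)
# Re-adds the permanent ARP entry on boot; as noted above, it does not survive
# a link cycle.
arp -S 172.16.16.9 11:54:33:A8:B2:6B
```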

11. 18.1 Legacy Series / Multi-WAN, Policy Routing, and Traffic Shaping
« on: February 16, 2018, 01:58:28 pm »
Dear folks, in the past there seem to have been problems with mixing Shaping into Gateway redirect rules.
(see https://github.com/opnsense/core/issues/1230 ).

We are testing a deployment on 18.1.2 amd64, and are running into a similar problem:

1. A gateway group has been defined (2 GW's on different interfaces, A and B)
2. A firewall rule has been created to redirect specific traffic via the gateway group
3. Manual NAT is enabled
4. Traffic shaping for the two interfaces (A and B) is created
   - A pipe each with WFQ w/ codel
   - A "catch-all" default rule redirecting traffic into the pipe
5. Advanced -> "Shared forwarding" is enabled (ticked)

Expected outcome: Traffic is limited to the pipe's bandwidth

Actual outcome: No limiting is applied, and the queue shows no connection/limiting in the status section.

Is there anything obvious missing?
In other setups the shaping (without Multi-WAN) works alright.

Thanks in advance!

Edit: I would like to add that under "ipfw show" no rules redirecting anything into pipes or queues show up, even after a reset and apply on the traffic shaper:
Quote
00100      3      2002 allow pfsync from any to any
00110     21      1176 allow carp from any to any
00120      0         0 allow ip from any to any layer2 mac-type 0x0806,0x8035
00130      0         0 allow ip from any to any layer2 mac-type 0x888e,0x88c7
00140      0         0 allow ip from any to any layer2 mac-type 0x8863,0x8864
00150      0         0 deny ip from any to any layer2 not mac-type 0x0800,0x86dd
00200      0         0 skipto 60000 ip6 from ::1 to any
00201      0         0 skipto 60000 ip4 from 127.0.0.0/8 to any
00202      0         0 skipto 60000 ip6 from any to ::1
00203      0         0 skipto 60000 ip4 from any to 127.0.0.0/8
01001      0         0 skipto 60000 udp from any to 192.168.4.6 dst-port 53 keep-state :default
01001     21      2114 skipto 60000 ip from any to { 255.255.255.255 or 192.168.4.6 } in
01001     28      8526 skipto 60000 ip from { 255.255.255.255 or 192.168.4.6 } to any out
01001      0         0 skipto 60000 icmp from { 255.255.255.255 or 192.168.4.6 } to any out icmptypes 0
01001      0         0 skipto 60000 icmp from any to { 255.255.255.255 or 192.168.4.6 } in icmptypes 8
01002      0         0 skipto 60000 udp from any to 192.168.4.77 dst-port 53 keep-state :default
01002      0         0 skipto 60000 ip from any to { 255.255.255.255 or 192.168.4.77 } in
01002      0         0 skipto 60000 ip from { 255.255.255.255 or 192.168.4.77 } to any out
01002      0         0 skipto 60000 icmp from { 255.255.255.255 or 192.168.4.77 } to any out icmptypes 0
01002      0         0 skipto 60000 icmp from any to { 255.255.255.255 or 192.168.4.77 } in icmptypes 8
01003      0         0 skipto 60000 udp from any to 192.168.4.131 dst-port 53 keep-state :default
01003      0         0 skipto 60000 ip from any to { 255.255.255.255 or 192.168.4.131 } in
01003      0         0 skipto 60000 ip from { 255.255.255.255 or 192.168.4.131 } to any out
01003      0         0 skipto 60000 icmp from { 255.255.255.255 or 192.168.4.131 } to any out icmptypes 0
01003      0         0 skipto 60000 icmp from any to { 255.255.255.255 or 192.168.4.131 } in icmptypes 8
01004      0         0 skipto 60000 udp from any to 192.168.4.146 dst-port 53 keep-state :default
01004      0         0 skipto 60000 ip from any to { 255.255.255.255 or 192.168.4.146 } in
01004      0         0 skipto 60000 ip from { 255.255.255.255 or 192.168.4.146 } to any out
01004      0         0 skipto 60000 icmp from { 255.255.255.255 or 192.168.4.146 } to any out icmptypes 0
01004      0         0 skipto 60000 icmp from any to { 255.255.255.255 or 192.168.4.146 } in icmptypes 8
06000      0         0 skipto 60000 tcp from any to any out
06199      0         0 skipto 60000 ip from any to any
30000      0         0 count ip from any to any
60000      0         0 return ip from any to any
65535 682972 605826657 allow ip from any to any

However, the queues are defined, as shown by /sbin/ipfw queue show:
Quote
q10006  50 sl. 0 flows (1 buckets) sched 10000 weight 40 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10007  50 sl. 0 flows (1 buckets) sched 10001 weight 40 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10004  50 sl. 0 flows (1 buckets) sched 10001 weight 70 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10005  50 sl. 0 flows (1 buckets) sched 10002 weight 70 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10002  50 sl. 0 flows (1 buckets) sched 10002 weight 95 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10003  50 sl. 0 flows (1 buckets) sched 10002 weight 80 lmax 0 pri 0  AQM CoDel target 5ms interval 500ms NoECN
q10000  50 sl. 0 flows (1 buckets) sched 10000 weight 95 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10001  50 sl. 0 flows (1 buckets) sched 10001 weight 95 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10014  50 sl. 0 flows (1 buckets) sched 10002 weight 20 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10015  50 sl. 0 flows (1 buckets) sched 10002 weight 55 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10010  50 sl. 0 flows (1 buckets) sched 10002 weight 65 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10008  50 sl. 0 flows (1 buckets) sched 10002 weight 40 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10009  50 sl. 0 flows (1 buckets) sched 10001 weight 65 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10016  50 sl. 0 flows (1 buckets) sched 10001 weight 55 lmax 0 pri 0  AQM CoDel target 5ms interval 100ms NoECN

12
18.1 Legacy Series / Static ARP entry when not using DHCP [RESOLVED]
« on: February 11, 2018, 05:42:00 pm »
Dear folks,

We would like to add a static ARP entry to OPNsense for multicast.

There is only an option on DHCP, but we don't use DHCP.

Is there a way via the GUI to add a static IP <=> MAC (ARP) entry that we have missed?

Thanks in advance!
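For what it's worth, a static ARP entry can at least be added from the shell while the GUI lacks the option; a minimal sketch with hypothetical addresses:

```shell
# Add a permanent ARP entry (hypothetical IP and MAC; run as root)
arp -S 192.168.1.50 00:11:22:33:44:55

# Verify it is listed as permanent
arp -an | grep 192.168.1.50
```

Note that such an entry does not survive a reboot, so it would have to be re-added from a boot-time hook until a GUI option exists.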

13
18.1 Legacy Series / Traffic Shaper Pipes - expose packet loss & delay
« on: February 02, 2018, 01:24:05 pm »
Dear folks,

As far as I can see, creating artificial delay and packet loss is not exposed via the WebGUI for pipes.
Would those parameters be terribly difficult to add?

This is useful in test environments to simulate network conditions.

Do you think this would be useful?
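Under the hood, dummynet already supports both knobs, so exposing them should mostly be a GUI change; from the shell a pipe can be configured like this (pipe number and values are hypothetical):

```shell
# 10 Mbit/s pipe with 100 ms of added delay and a 1% packet loss rate
ipfw pipe 1 config bw 10Mbit/s delay 100ms plr 0.01
```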

14
18.1 Legacy Series / Queue statistics
« on: January 29, 2018, 08:34:16 pm »
Hello folks,

I have a question regarding the monitoring scripts I'm writing.
I am trying to obtain queue information.

So from the ipfw manual:
Quote
Statistics

Per-flow queueing can be useful for a variety of purposes. A very simple one is counting traffic:

ipfw add pipe 1 tcp from any to any
ipfw add pipe 1 udp from any to any
ipfw add pipe 1 ip from any to any
ipfw pipe 1 config mask all

The above set of rules will create queues (and collect statistics) for all traffic. Because the pipes have no limitations, the only effect is collecting statistics. Note that we need 3 rules, not just the last one, because when ipfw tries to match IP packets it will not consider ports, so we would not see connections on separate ports as different ones.

Cool. So with /sbin/ipfw queue show I can get this output (same as shaper diag page):
Quote
q10006  50 sl. 1 flows (1 buckets) sched 10001 weight 50 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0        8     1528  0    0   0
q10007  50 sl. 0 flows (1 buckets) sched 10000 weight 50 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10004  50 sl. 1 flows (1 buckets) sched 10000 weight 99 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
  0 ip           0.0.0.0/0             0.0.0.0/0     2863  2857626  0    0 167
q10005  50 sl. 1 flows (1 buckets) sched 10001 weight 95 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
  0 ip           0.0.0.0/0             0.0.0.0/0        2     1300  0    0   0
q10002  50 sl. 1 flows (1 buckets) sched 10000 weight 80 lmax 1500 pri 0  AQM CoDel target 5ms interval 500ms NoECN
  0 ip           0.0.0.0/0             0.0.0.0/0        6      960  0    0   0
q10003  50 sl. 0 flows (1 buckets) sched 10000 weight 75 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10000  50 sl. 0 flows (1 buckets) sched 10000 weight 20 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10001  50 sl. 0 flows (1 buckets) sched 10000 weight 70 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN
q10008  50 sl. 0 flows (1 buckets) sched 10001 weight 60 lmax 1500 pri 0  AQM CoDel target 5ms interval 500ms NoECN
q10009  50 sl. 0 flows (1 buckets) sched 10001 weight 20 lmax 1500 pri 0  AQM CoDel target 5ms interval 100ms NoECN

Cool. Combine that with data from $config["OPNsense"]["TrafficShaper"] and away we go.

However, it seems that this only collects current information about the queues (i.e. backlog).
From the ipfw manual it's unclear how to interpret the output (or I am too confused to read it).

Before, on pf with altq we'd use the output from /sbin/pfctl -vsq, which would give incremental counters over intervals.
Is there a similar command or a better way to monitor queue statistics for nagios RRD graph generation?
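Lacking a pfctl-style view, one workaround is to sample the cumulative per-rule counters from /sbin/ipfw -a list twice and diff them yourself. A minimal sketch, with synthetic sample lines standing in for two real snapshots (the rule number and counters are hypothetical):

```shell
# Two snapshots of a rule line: "<rule> <pkts> <bytes> ..."
sample1="01001 21 2114"
sample2="01001 35 3000"

# Packets seen during the sampling interval (field 2 is the packet counter)
p1=$(echo "$sample1" | awk '{print $2}')
p2=$(echo "$sample2" | awk '{print $2}')
echo "packets in interval: $((p2 - p1))"
```

On a live box the two samples would come from successive `ipfw -a list` runs; the deltas can then be fed straight into nagios/RRD.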


15
18.1 Legacy Series / Prevent SFTP login
« on: January 24, 2018, 07:34:34 pm »
Hi,

when creating a new user with no privileges assigned, this user can SFTP into the OPNsense box and browse anywhere outside their home directory, e.g. /conf, and happily retrieve config.xml with keys, TLS material and everything.

To be clear: the "System: Shell account access" privilege is not needed for this.

The user has no privileges assigned at all.

How to reproduce:
1. System->Access->Users
   Create user "test", assign no privileges
2. Login via SFTP with the username and password.

This surely cannot be desired or intentional?
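Until this is fixed, a possible stopgap, assuming OPNsense's sshd is stock OpenSSH, is to whitelist SSH/SFTP authentication to admin accounts only (the group name below is an example):

```
# /etc/ssh/sshd_config fragment:
# only members of wheel may authenticate at all; everyone else,
# including GUI-only users like "test", is rejected before SFTP starts.
AllowGroups wheel
```

Note that OPNsense may regenerate sshd_config on reconfigure, so such a change might need to go through the system's own templating rather than a direct edit.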
