Messages - zemanek

#1
25.1, 25.4 Series / Re: SNAT working selectively
May 21, 2025, 04:56:36 PM
Also, I have set:

net.inet.ipsec.filtertunnel   = 0x0001
net.inet6.ipsec6.filtertunnel = 0x0001
net.enc.out.ipsec_bpf_mask    = 0x0000
net.enc.out.ipsec_filter_mask = 0x0000
net.enc.in.ipsec_bpf_mask     = 0x0000
net.enc.in.ipsec_filter_mask  = 0x0000
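
For reference, a minimal sketch of checking and applying these from the shell, assuming console/SSH access; on OPNsense they would normally be persisted under System --> Settings --> Tunables rather than set by hand:

# sysctl net.inet.ipsec.filtertunnel net.enc.out.ipsec_filter_mask   # show current values
# sysctl net.inet.ipsec.filtertunnel=1                               # apply one value live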

#2
25.1, 25.4 Series / Re: SNAT working selectively
May 21, 2025, 04:19:41 PM
If I change
nat on ena0 inet from any to 10.1.1.0/24 -> (ena0:0) port 1024:65535
to
nat on ena0 inet from any to 10.1.1.247 -> 10.112.0.178 port 1024:65535
it does not work either; the outgoing packet's source IP is still the VTI's IP address, not 10.112.0.178.

The other difference between the BGP communication and the ICMP communication is that 192.168.203.68 has a static route with UGHS flags, while 10.1.1.0/24 is a BGP-injected route with UG1 flags.
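
A quick way to compare how the two destinations are routed (the flag difference above) is a sketch like this, using the addresses from this thread:

# netstat -rn | grep -E '192.168.203.68|10.1.1'   # compare flags and interfaces side by side
# route -n get 10.1.1.247                         # show the route actually chosen for the ping target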
#3
25.1, 25.4 Series / SNAT working selectively
May 21, 2025, 03:03:53 PM
Hello,

I have OPNsense 25.1 with one (WAN) interface and 4 VTI interfaces for 4 VPNs. I also have this NAT configuration:

# pfctl -s nat
no nat proto carp all
nat on ena0 inet from any to 192.168.202.68 -> 10.112.0.178 port 1024:65535
nat on ena0 inet from any to 192.168.202.69 -> 10.112.0.178 port 1024:65535
nat on ena0 inet from any to 192.168.203.68 -> 10.112.0.178 port 1024:65535
nat on ena0 inet from any to 192.168.203.69 -> 10.112.0.178 port 1024:65535
nat on ena0 inet from any to 10.0.1.0/24 -> (ena0:0) port 1024:65535
nat on ena0 inet from any to 10.1.1.0/24 -> (ena0:0) port 1024:65535
nat-anchor "acme-client/*" all
no rdr proto carp all
no rdr on ena0 proto tcp from any to (ena0) port = ssh
no rdr on ena0 proto tcp from any to (ena0) port = http
no rdr on ena0 proto tcp from any to (ena0) port = 10443
rdr-anchor "acme-client/*" all

where 10.112.0.178 is an IP alias on the WAN interface (primary IP 10.100.178.10); it serves as my BGP router IP.

Now when BGP (the frr plugin) talks to the BGP peer 192.168.203.68, the source IP of outgoing packets through the VTI (VPN) is correctly rewritten to 10.112.0.178.
But when I try to ping 10.1.1.247, the outgoing packets through the VTI keep the VTI's IP address (10.101.178.18) instead of being rewritten to the WAN IP (10.100.178.10).

Why?
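
To narrow this down, one sketch of checking whether the NAT rule is evaluated at all for the ICMP traffic (the VTI interface name ipsec1 below is a placeholder, not taken from this setup):

# pfctl -vs nat | grep -A 2 '10.1.1.0/24'   # per-rule evaluation and packet counters
# pfctl -ss | grep 10.1.1.247               # states created while pinging
# tcpdump -ni ipsec1 icmp                   # ipsec1 is a placeholder VTI name; shows the source address actually leaving the tunnel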
#4
Quote:
So: what hypervisor is the provider using? KVM? Can you set the machine type to "q35" and the CPU type to "host"?

I don't know; I have no access to the hypervisor.

Anyway, I don't have any issues with these instances while they are running (VPN stable, HAProxy working, ...), at least for now; it is just the upgrade/plugin installation that is suspicious.
#5
The file contains just:

18.7.1_3-445ae2139
#6
OK, I switched the VM from an AMD EPYC to an Intel Xeon with 4 GB RAM and tried installing a small plugin (os-cpu-microcode-intel):

Number of packages to be installed: 6

The process will require 23 MiB more space.
[1/6] Installing pciids-20250309...
[1/6] Extracting pciids-20250309: ..... done
[2/6] Installing cpu-microcode-rc-1.0_2...
[2/6] Extracting cpu-microcode-rc-1.0_2: .... done
[3/6] Installing libpci-3.13.0...
[3/6] Extracting libpci-3.13.0: .......... done
Segmentation fault
[4/6] Installing x86info-1.31.s03_1...
[4/6] Extracting x86info-1.31.s03_1: ....... done
[5/6] Installing cpu-microcode-intel-20250211...
[5/6] Extracting cpu-microcode-intel-20250211: .......... done
[6/6] Installing os-cpu-microcode-intel-1.1...
[6/6] Extracting os-cpu-microcode-intel-1.1: .. done
Reloading firmware configuration
Ignoring invalid metadata: /usr/local/opnsense/version/opnsense
Writing firmware settings: FreeBSD OPNsense
Writing trust files...done.
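
Whether the Intel microcode actually gets applied on this VM can be checked in the kernel messages after a reboot; a minimal sketch:

# dmesg | grep -i microcode   # look for 'CPU microcode: ...' lines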

Then I uninstalled it:

Number of packages to be removed: 5

The operation will free 23 MiB.
[1/5] Deinstalling x86info-1.31.s03_1...
[1/5] Deleting files for x86info-1.31.s03_1: ....... done
[2/5] Deinstalling libpci-3.13.0...
[2/5] Deleting files for libpci-3.13.0: .......... done
Segmentation fault
[3/5] Deinstalling cpu-microcode-intel-20250211...
[3/5] Deleting files for cpu-microcode-intel-20250211: .......... done
[4/5] Deinstalling pciids-20250309...
[4/5] Deleting files for pciids-20250309: ..... done
[5/5] Deinstalling cpu-microcode-rc-1.0_2...
[5/5] Deleting files for cpu-microcode-rc-1.0_2: .... done
***DONE***
#7
Now it has 2 GB; the Lobby dashboard reports 31% memory utilization.
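
The same can be cross-checked from the shell (a sketch):

# sysctl hw.physmem hw.usermem   # installed and usable memory
# top -b | head -n 8             # Mem/Swap summary lines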
#8
Sorry, I do not have control over the hypervisor. It's a cloud VM.
#9
I added 1 GB of RAM and removed the microcode plugin. During removal there was again a segmentation fault, and dmesg shows:
pid 61978 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
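
ld-elf32.so.1 is the 32-bit runtime linker, so the crashing process is some 32-bit binary, presumably invoked by one of the package scripts; a rough sketch of listing which 32-bit executables are even present:

# file /usr/local/sbin/* /usr/local/bin/* 2>/dev/null | grep 'ELF 32-bit'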
#10
It's a VM. I do have the plugin installed, but:
CPU microcode: no matching update found
CPU: AMD EPYC 7571 (2199.99-MHz K8-class CPU)

Will try with more RAM.
#11
pid 86448 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 98965 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 12770 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 22219 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 31720 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 38510 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 62016 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 75000 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 88570 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 97221 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 7282 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 17994 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 32113 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 31038 (pkg-static), jid 0, uid 0, was killed: failed to reclaim memory
pid 8123 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 16784 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 36473 (ld-elf32.so.1), jid 0, uid 0: exited on signal 11 (no core dump - bad address)
pid 91092 (pkg-static), jid 0, uid 0, was killed: failed to reclaim memory
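
The "failed to reclaim memory" lines indicate pkg-static was killed under memory pressure; a sketch of watching that while reproducing:

# swapinfo -h                    # is there any swap configured, and how full is it
# top -b -o res | head -n 15     # largest resident processes during the pkg run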
#12
Hello,

During every upgrade I see several segmentation faults, like the ones in this extract:
[39/104] Installing brotli-1.1.0,1...
[39/104] Extracting brotli-1.1.0,1: .......... done
Segmentation fault
[40/104] Upgrading nspr from 4.35 to 4.36...
[40/104] Extracting nspr-4.36: .......... done
Segmentation fault
[41/104] Upgrading py311-numexpr from 2.10.1 to 2.10.2...
[41/104] Extracting py311-numexpr-2.10.2: .......... done
[42/104] Upgrading libltdl from 2.4.7 to 2.5.4...
[42/104] Extracting libltdl-2.5.4: .......... done
Segmentation fault
[43/104] Upgrading oniguruma from 6.9.9 to 6.9.10...
[43/104] Extracting oniguruma-6.9.10: .......... done
Segmentation fault
[44/104] Upgrading php82-session from 8.2.23 to 8.2.27...
[44/104] Extracting php82-session-8.2.27: .......... done
Should I be worried? It may not be related, but during the upgrade to the last 24.x version the upgrade hung during this phase. Subsequent upgrades finished, though.
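
One way to check that none of those segfaults left a package half-broken (a sketch, run after the upgrade completes):

# pkg check -sa              # verify checksums of all installed files
# dmesg | grep 'signal 11'   # see which binary is actually faulting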
#13
Damn! Now I feel stupid. I forgot to add the user column after moving the entry to the system-wide cron.d/.

Sorry.
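
For completeness, a file in /usr/local/etc/cron.d/ uses the system crontab format, i.e. with the user field before the command; a hypothetical example:

# the script path below is made up
*/15  *  *  *  *  root  /usr/local/scripts/example.sh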
#14
@cookiemonster I'm afraid I don't understand the question. When a cron job defined in a file in /usr/local/etc/cron.d/ never runs, is that not a failure to you? When the entry was in /var/cron/tabs/root, it worked.
#15
Hello,

Since /etc/crontab is purged of custom entries every time I update cron jobs via the OPNsense web UI (System --> Settings --> Cron), I moved the custom line to /usr/local/etc/cron.d/, as suggested in /etc/crontab:

# or /usr/local/etc/cron.d and follow the same format as
# /etc/crontab, see the crontab(5) manual page.

But it never runs. So what is crippled in OPNsense compared with plain FreeBSD?
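
One sketch for checking whether cron even parses the file is to look at the cron log around the scheduled time (the exact log location differs between stock FreeBSD, typically /var/log/cron, and OPNsense's syslog-ng layout):

# grep -r example.sh /var/log/cron* 2>/dev/null   # example.sh stands in for the actual command from the cron.d file
# ls -l /usr/local/etc/cron.d/                    # confirm the file is present and readable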