Recent posts

#1
26.1 Series / Re: FRR BGP peer with MD5 pass...
Last post by Odjuret - Today at 08:20:09 PM
It looks like /usr/local/opnsense/scripts/frr/register_sas is not being run.
#2
26.1 Series / FRR BGP peer with MD5 password...
Last post by Odjuret - Today at 08:08:52 PM
There is a thread in "23.1 Legacy Series" about how FRR BGP with MD5 should work out of the box, but setkey does not actually set the md5-key.

I tried to set the md5-key manually as described in the old forum post, and the peer went up.
My settings look right: I have "Local Initiator IP" filled in with the source IP I want in my setkey, and the peer IP as the peer. But no keys are created.

Was this working and is now broken again?
#3
Tutorials and FAQs / Re: IPv6 Control Plane with FQ...
Last post by Seimus - Today at 08:01:11 PM
The weight doesn't do anything when using FQ_CoDel. The FQ in FQ_CoDel does what it says: fair queuing.

So basically FQ_CoDel doesn't do any priority queuing or weighted queuing.

In theory even this design should not cause too much harm for the control plane, as FQ_CoDel will not let any flow starve, provided there is not an excessive number of flows. It can, however, create extra drops and tail drops for the control plane, i.e. create AQM back-pressure.

What is interesting here is that your tests show good results when using FQ_CoDel with a single scheduler rather than multiple schedulers. So the question is: how and why did two different schedulers for two different traffic planes impact the results? Considering the control plane, if configured properly, should not carry any data-plane traffic, e.g. the test traffic.

Regards,
S.
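The fair-queuing behaviour described above can be pictured with a toy round-robin scheduler (an illustrative Python sketch only; real FQ_CoDel hashes flows into queues and additionally applies CoDel AQM per queue, which this omits):

```python
# Toy round-robin fair queuing: each flow gets its own queue and the
# scheduler serves one packet per flow per round, so a heavy bulk flow
# cannot starve a light control-plane flow.
from collections import deque

def fair_dequeue(flows, budget):
    """Serve up to `budget` packets round-robin across per-flow queues."""
    served = []
    queues = {name: deque(pkts) for name, pkts in flows.items()}
    while budget > 0 and any(queues.values()):
        for name, q in queues.items():
            if q and budget > 0:
                served.append((name, q.popleft()))
                budget -= 1
    return served

# A bulk flow with 100 queued packets and a small control-plane flow
# (think BGP keepalives) with 3: the small flow still gets served.
out = fair_dequeue({"bulk": list(range(100)), "bgp": list(range(3))}, budget=8)
print(out)
```

Even though "bulk" has far more packets queued, all three "bgp" packets are served within the first rounds, which is exactly the no-starvation property under a non-excessive number of flows.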
#4
Tutorials and FAQs / Re: [HOWTO] OpnSense under vir...
Last post by snulgy - Today at 07:30:08 PM
More troubleshooting (still clueless). I picked the worst offender, 192.168.50.116 to 192.168.50.102 (the latter being one of the new VM's dual-homed NICs). Neither is an OPNsense interface.

root@OPNsense# tcpdump -ni vtnet0_vlan50 "src 192.168.50.116 and dst 192.168.50.102"
-> shows tons of traffic, as expected on a promiscuous, bridged interface in this network

root@OPNsense# tcpdump -ni vtnet0_vlan50 -p "(ether host $OPNSENSEMAC or ether broadcast) and src 192.168.50.116"
-> no longer promiscuous; this shows exactly what I would expect: traffic from this source host leaving (or trying to leave) this subnet, being sent to OPNsense. Destinations here are never in 192.168.50.0/24

root@OPNsense# tcpdump -ni vtnet0_vlan50 -p "(ether host $OPNSENSEMAC or ether broadcast) and src 192.168.50.116 and dst 192.168.50.102"
-> again shows nothing, which makes sense, as local traffic should not be addressed to the OPNsense MAC address. Yet during this test my OPNsense logs keep filling up with drops for this exact source/destination/interface pattern?!

#5
Tutorials and FAQs / Re: [HOWTO] OpnSense under vir...
Last post by snulgy - Today at 06:36:54 PM
Quote from: nero355 on Today at 05:53:29 PM: Even when it's disabled it still occurs because of:
Quote: there are only 3 routes which are correct (to each local network on the right interface, plus the default gateway which again points to the right interface).
When your Client/Server receives packets from Network X via Gateway Z, the data won't go back via Gateway Z if the Client/Server has a NIC that is also connected to Network X; it will use that NIC to send the data back to Network X.

That's an excellent point that I had not fully thought through, but I think you're right that this is one of the factors I am dealing with. I actually have a few dual-homed machines on the network (a couple that still have an interface in the native VLAN in addition to their new home, for example). And this should explain some of the random drops I have been seeing.

But as for the high volume of drops that has become very obvious recently: they are dropped by OPNsense for invalid state, and the logs show those packets have both their source and destination IP in the same VLAN/subnet (and neither source nor destination is an OPNsense interface, obviously). The logs show those drops are for incoming packets on the firewall interface of that VLAN/subnet. But those shouldn't be routed to the firewall, and given it's local traffic (ARP finds the neighbor, traceroute says it's one hop...), I still can't explain this as an asymmetric routing issue. I am missing something...

Another puzzling thing is that everything works. I fill up the logs with drop events, but those flows all succeed. If I had a serious asymmetric routing problem I would expect to experience network problems, but I do not.
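For what it's worth, the "invalid state" pattern can be pictured with a toy model of stateful filtering (an illustrative Python sketch, not OPNsense/pf internals; all names and addresses are made up):

```python
# Toy model of a stateful firewall. It creates state when it sees the
# first packet of a flow and only passes replies that match existing
# state; a reply for a flow it never saw initiated is an "invalid
# state" drop, even if the flow itself succeeds end to end.

class StatefulFirewall:
    def __init__(self):
        self.states = set()

    def handle(self, src, dst, is_initial):
        if is_initial:
            self.states.add((src, dst))   # forward direction creates state
            return "pass"
        if (dst, src) in self.states:     # reply matches existing state
            return "pass"
        return "drop: invalid state"      # reply without matching state

fw = StatefulFirewall()
# Symmetric path: the firewall sees both directions of the flow.
print(fw.handle("10.0.0.5", "10.0.1.7", True))
print(fw.handle("10.0.1.7", "10.0.0.5", False))
# Asymmetric path: the initial packet bypassed the firewall (e.g. a
# dual-homed host answered directly), so the firewall only ever sees
# one side of the conversation and logs a drop.
print(fw.handle("192.168.50.102", "192.168.50.116", False))
```

This also matches the "everything works but the logs fill up" symptom: the dropped copy is one the firewall should never have needed to forward in the first place.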
#6
OPNsense version: OPNsense 25.7.4 (amd64)
IPsec configuration method: VPN → IPsec → Connections (swanctl)
Deployment: AWS EC2 (2 NIC – WAN + LAN)
Azure side: Azure Virtual Network Gateway (route-based VPN)

Topology:

AWS VPC: 10.2.0.0/16
Azure VNet: also 10.2.0.0/16 (overlapping)
Target resource in Azure: 172.18.5.4 (SQL MI endpoint)
NAT on OPNsense:
Source: 10.2.0.0/16
Destination: 172.18.5.4
Translated to: 172.31.255.1
Virtual IP configured on OPNsense:
172.31.255.1/32 (IP Alias)

Goal:
Allow AWS workloads (10.2.0.0/16) to access Azure resource 172.18.5.4, using NAT to avoid overlapping address space.

IPsec Configuration:

Phase 1 (Connection):

IKEv2
Local address: 10.2.0.171 (WAN private IP)
Remote address: Azure VPN Gateway public IP
UDP encapsulation enabled

Child SA:

Local: 172.31.255.1/32
Remote: 172.18.5.4/32
Mode: Tunnel
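For reference, the Child SA selectors above correspond to a swanctl.conf fragment roughly like the following (a sketch only; the connection and child names are placeholders, and auth/proposal settings are omitted):

```
connections {
  azure {
    version = 2                      # IKEv2
    local_addrs  = 10.2.0.171        # WAN private IP
    remote_addrs = <azure-gw-public-ip>
    encap = yes                      # force UDP encapsulation (NAT-T, port 4500)
    children {
      sqlmi {
        mode = tunnel
        local_ts  = 172.31.255.1/32  # post-NAT source (virtual IP)
        remote_ts = 172.18.5.4/32    # Azure SQL MI endpoint
      }
    }
  }
}
```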


Observed Behavior:

1. With Policies OFF
Tunnel establishes successfully (IKE + CHILD SA)
NAT works (traffic translated correctly)
But logs show repeated:
querying policy 172.31.255.1/32 === 172.18.5.4/32 out failed, not found
Traffic does not flow

2. With Policies ON
Behavior changes significantly:
Traffic uses UDP 500 instead of 4500 (no NAT-T)
NAT appears to be bypassed
Azure side no longer sees expected source IP
Tunnel unstable / traffic fails

What I've already verified:

NAT rule is correct and hit counters increment
Virtual IP (172.31.255.1) is present and active
Azure side configured with matching selectors
Azure uses route-based gateway
Security groups / NSGs allow traffic
Tunnel consistently establishes (Phase 1 + Phase 2)

What I'm trying to determine:

Whether this is:
Misconfiguration on my part
OR a limitation of the current IPsec implementation in OPNsense

Appreciate any guidance, especially from anyone who has successfully implemented NAT with overlapping networks in the current (swanctl) IPsec model.

#7
Hi all,
I also want to update to a specific version, because sometimes there are problems if you don't install updates one by one.
Is it possible to install a specific version?
Regards, Arthur
#8
Tutorials and FAQs / Re: [HOWTO] OpnSense under vir...
Last post by nero355 - Today at 05:53:29 PM
Quote from: snulgy on Today at 05:19:42 PM: I did of course double-check those settings, as I mentioned: packet forwarding is off for IPv4 & IPv6 (I actually disabled v6 entirely for now to rule out issues)
Even when it's disabled it still occurs because of:
Quote: there are only 3 routes which are correct (to each local network on the right interface, plus the default gateway which again points to the right interface).
When your Client/Server receives packets from Network X via Gateway Z, the data won't go back via Gateway Z if the Client/Server has a NIC that is also connected to Network X; it will use that NIC to send the data back to Network X.

Quote: It does smell like asymmetric routing but I haven't yet figured out how this can possibly happen here.
I am working on a document about this, but it's not finished yet, because I am writing it many months after dealing with such an issue, so I need to double-check a lot of things.

If it turns out you need to do something to fix asymmetric routing, let me know and I will try to help you as much as possible!
#9
26.1 Series / Re: os-nut: Broken plugin kill...
Last post by nero355 - Today at 05:46:00 PM
Quote from: hakuna on Today at 05:46:32 AM: SSDs/NVMe are electronic; I can't trust that cutting the power while in halt won't damage them, because they are still in operational mode.
The issue with SSDs is that they have become a mess over time:

- First there were "real SSDs", as I call them: both caching RAM and power-loss-protection capacitors that guard that RAM and the data on the SSD.
A simple example of such an SSD: the Intel 320 Series, and everything that was workstation/enterprise level at the time.
Later on there was the Crucial M500 DC, but these were different from the next group =>

- Then there were suddenly SSDs where this became a 50/50 deal: only the RAM and the index of the NAND are guarded. So not the actual data !!
A simple example of such an SSD: the Crucial M500

- But then everything went to hell, basically:
Samsung started selling "Pro" SSDs which were not Pro at all...
They do have caching RAM, but its protection is... NONE.
A so-called write-back procedure runs during the next boot and does some checks, and that's it: maybe it goes well, maybe it doesn't !!

The worst thing about this is that many other brands followed, and even though the market got flooded with them, somehow no one cared ?! :(

- The next step was suddenly selling SSDs without any cache at all !!
They are the cheapest, they work, but really: what are we doing here ??
How did we allow things to get this far ?!

And this was just the SATA era of SSDs...

All of this got applied immediately when NVMe SSDs went on sale, so now your super-fast travelling data has an even greater chance of getting corrupted because of that speed !!

YAY ?!?!



/End of rant.

Quote: I will die on this hill: halt is not and will never be a shutdown.
I agree with you that the whole thing should be automatic and hassle free and most of all clean :)

Quote from: lmoore on Today at 07:53:53 AM: During the evolution of hard disk drives, upon power failure, IDE drives (and others too) were designed to retract the heads to the landing zone.
But AFAIK the SATA controller needs to give them that signal, and that signal in turn comes from the operating system, so it's one big chain that needs to do its work correctly.
#10
Hello everyone,
so far only the 3CX is reachable directly from outside, over the Internet.
All other services are accessible only via VPN.
Now, however, a few changes are coming up, and there are plans to make various services available directly online.
There is one public external IP address.

Now for the possible scenario:
Install Caddy on the OPNsense and set it up as a reverse proxy for access to Nextcloud and possibly further services.
Create a separate VLAN for each service: one for telephony (already in place), one for Nextcloud, and so on.

Does that make sense?
And, even more importantly, is it actually more secure?
What do you think?

Best regards,
Arthur