Messages - Hunduster

#1
Hello everyone,

I currently have a problem with IPsec VPN under 24.7.6.

My IPsec tunnels go offline after about 24 hours. The log only shows "ignoring IKE_SA setup from 58.147.46.77, per-IP half-open IKE_SA limit of 5 reached" for all 5 of my tunnels.

Also, the service cannot be restarted via the GUI, so I have to restart the entire node.
Nothing has changed on my tunnels since the update: none have been added, none have been removed, and no values have been changed.

Is this a known error? Unfortunately, I could not find anything about it.
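For anyone hitting the same message: "per-IP half-open IKE_SA limit" is strongSwan's DoS protection (the `charon.block_threshold` setting, default 5) kicking in when IKE negotiations start but never complete. A few diagnostic commands from the OPNsense shell; this is a hedged sketch assuming a stock 24.7 install with the swanctl backend, and the `configctl` action name is an assumption:

```shell
# List current IKE SAs; half-open ones appear without an ESTABLISHED state
swanctl --list-sas

# Show the loaded connection definitions to confirm the tunnels are present
swanctl --list-conns

# Attempt an IPsec service restart from the CLI when the GUI restart fails
# (configctl is OPNsense's backend control tool; exact action name may differ)
configctl ipsec restart
```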
#3
Quote from: StephanBiegel on April 08, 2024, 03:14:41 PM
Since mail servers are also running on the boxes in the DMZ, they always have to send out via the correct GW and with the correct source IP. Otherwise many SMTP servers reject them, because the IPs are not listed as MX records in DNS.

I also had to disable the automatic outbound rules for this and create manual ones with the respective external IP as the masquerade address. Works well now. Put geoblocking in front of it and all is well :)

Those two things are independent of each other. The rule set and NAT rules are processed top to bottom. So you create an SNAT rule for your mail servers onto the correct corresponding IP. Outbound port 25 is enough here; you don't have to NAT the whole server onto a fixed IP. The firewall rule required for this then also points only to the respective gateway.

All other rules go onto the WAN group, so that clients can still get out if one gateway fails; in that case only the mail server would be offline, or rather unable to send.

As I said, just a tip ;)
#4
Quote from: StephanBiegel on April 08, 2024, 09:40:12 AM
Since the first IPs already worked, I left out the gateways here.
Actually, OPNsense should automatically select the matching gateway based on the subnet, so this setting should not be necessary. But that's fine, I did it the same way too.

Also remember to set up a WAN group, so that your internal systems are masqueraded (NAT) onto the group and can still get out should one of the GWs fail. You then have to set the firewall rules to the WAN group accordingly.
#6
Well, I have a very similar scenario on a Colt fiber connection. Here I have two /29 networks.

On the WAN interface I created the first IP from subnet A.

All other IPs from subnet A and subnet B are set up as CARP addresses in my case, which is due to the cluster and is analogous to your virtual IPs. So far, everything is correct. For each CARP/VIP I then entered the matching gateway for its subnet.

For both subnets the matching gateway is created, where the gateway from subnet A is the upstream gateway and the gateway from subnet B is upstream AND a "far gateway". The latter means that OPNsense does not complain that this gateway is not from subnet A's network, which is bound to the interface.

With that, gateway B should come online and the IPs from subnet B should be reachable.



#7
Hello Stephan,

regarding your second public subnet: is it from the same provider on the same connection?

In any case, both subnets need an "upstream" gateway on the OPNsense, since otherwise the packets cannot be routed.
#8
It's always the little things that make a big difference! :D I have now been able to find out exactly what the problem was: MTU.

With our old firewall, I had set an MTU of 1412 on the Vodafone connection and had foolishly carried this over to OPNsense.
Now that I have set the MTU back to 1500, it is stable on all firewall nodes.
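For others debugging similar instability: a quick way to verify what MTU a path really carries is to ping with the don't-fragment flag and the largest payload that still fits. A minimal sketch of the arithmetic (20-byte IPv4 header plus 8-byte ICMP header; the 1412 and 1500 values are the ones from this thread):

```python
def max_icmp_payload(mtu: int) -> int:
    """Largest ICMP echo payload that fits in one unfragmented IPv4 packet:
    MTU minus 20 bytes IPv4 header minus 8 bytes ICMP header."""
    return mtu - 20 - 8

# To probe whether a path really carries a 1500-byte MTU, ping with the
# don't-fragment flag and this payload size (FreeBSD/OPNsense shell):
#   ping -D -s 1472 <peer>
# If that fails but smaller sizes work, the path MTU is below 1500.
print(max_icmp_payload(1500))  # 1472
print(max_icmp_payload(1412))  # 1384
```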
#9
Quote from: Hunduster on March 01, 2024, 05:21:26 PM
You won't believe it, but restarting the master node solved the problem. I double and triple checked everything for two days and then the  ::)

No, it has not been solved. A few minutes after the node became master, I had the same error again :-(
#10
You won't believe it, but restarting the master node solved the problem. I double and triple checked everything for two days and then the  ::)
#11
Hello everyone,

I have a problem with one of my mail gateways behind an OPNsense and two Internet connections.

WAN 1 - COLT fiber
WAN 2 - Vodafone DOCSIS

Both connections have fixed IP addresses. On each OPNsense, a static IP is entered on the WAN interfaces and the remaining IP addresses are created as CARP.

I have two mail gateways behind the firewall, where port 25 is forwarded to the gateways via DNAT. One CARP IP is forwarded to gateway 1 and one CARP IP to gateway 2. The rules are otherwise identical.

The whole thing works perfectly with the COLT connection. With the Vodafone connection, I cannot establish a TLS connection, only plaintext. With various TLS checks I always get the same error message: "Cannot convert to SSL (reason: SSL wants a read first)"

So something is really messing up here.

I have already deactivated all possible security features such as IPS/IDS and Zenarmor, to no avail. The logs also show nothing; the firewall and DNAT rules let all packets through.

I'm slowly running out of ideas where else to look.
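A minimal way to reproduce the failing STARTTLS handshake from an arbitrary client, using only Python's standard library. The host and port are placeholders, and certificate verification is disabled because this is purely a connectivity test, not an implementation of the mail gateway's setup:

```python
import smtplib
import ssl

def check_starttls(host: str, port: int = 25) -> str:
    """Connect to an SMTP server, attempt STARTTLS, and return the
    negotiated TLS version (e.g. "TLSv1.3"); raises on failure."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # diagnostic only: skip name check
    ctx.verify_mode = ssl.CERT_NONE  # diagnostic only: skip cert check
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.ehlo()
        smtp.starttls(context=ctx)
        smtp.ehlo()
        return smtp.sock.version()   # sock is an SSLSocket after starttls
```

Running this once against the CARP IP behind the COLT path and once against the one behind the Vodafone path should show whether the handshake stalls on only one WAN. A handshake that exchanges the small initial messages fine but hangs when the server sends its certificate is a classic symptom of an MTU problem, which is indeed what post #8 above reports as the resolution.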
#12
High availability / Re: HA with two public subnets
February 23, 2024, 11:00:36 AM
Anyone?

I really have a comprehension problem right now.

Just to explain that again:

We have been assigned two different /29 subnets on a fiber optic connection by our provider. Each subnet has its own gateway.

The aim is for both nodes in the cluster to receive the IP addresses via CARP during failover.

I'm really at a loss as to how to implement this with the two different gateways.
#13
High availability / HA with two public subnets
February 22, 2024, 01:03:09 PM
Hello everyone,

I need your help for once.

I have two OPNsense running in an HA cluster. Both nodes are connected to a fiber optic connection.

On this connection we have two public /29 subnets, each with a gateway IP in the respective subnet.

Node 1 has the first public IP from subnet 1 and is working.
Node 2 has the first public IP from subnet 2 and cannot get out.

Each node shows its gateway as online, but only node 1 has Internet access.

Node 2 can only access the Internet if I deactivate the gateway from subnet 1, even though that gateway is offline and I have changed the interface setting from Auto Detect to gateway 2.

Can anyone explain why this is the case?
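Not a full answer, but when debugging which gateway a node actually uses, the kernel routing table is more trustworthy than the dashboard. These are standard FreeBSD base-system tools available on an OPNsense shell; a diagnostic fragment, not a fix:

```shell
# Show the IPv4 routing table; there should be exactly one default route,
# and its gateway tells you which of the two subnets wins
netstat -rn -f inet

# Ask the kernel which gateway and interface the default route resolves to
route -n get default
```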
#14
Fun Fact:

I am actually very sure that I never set the check mark here ;-)

So in my eyes something must have changed. But the whole thing makes sense, so I'm not angry. The Wife Acceptance Factor has just shifted significantly today  ;D
#15
OK...bizarre

When both nodes are active, the CPU load goes up alternately on both machines. While I run continuous pings on the respective node IP and the VIP, I repeatedly have dropouts on the three IP addresses.

If I restart a node, it doesn't matter which one, the CPU load on the remaining node goes back down to the normal range. HA generally seems to work, as the backup becomes the master when I restart Node1. As soon as the restarted Node1 is online again, Node1 becomes the master again and everything seems to run normally for about 3 minutes. Then the game starts all over again: CPU load increases on both nodes and the network dropouts start again.

Now that I have fired up Wireshark, I think I have found the problem:

There were obviously too many broadcasts between the subnets. In my case, the "Enable CARP Failover" checkbox was not set in the mDNS repeater. Now that I have activated it, there is no problem.

STP also turned out to be a source of trouble: the switch had briefly blocked the ports of the booting node each time, but then released them again.