Messages - dennis_u

#1
Hey there,

Today I updated an OPNsense box (DEC hardware).

What is not working:

  • the web interface is no longer reachable (no connection at all)
  • SSH login is no longer possible (authentication fails)

What is working:

  • Traffic in general (luckily!!)
  • IPsec site-to-site and OpenVPN dial-in, including authentication
  • SNMP requests; the SNMP info reports: 13.2-RELEASE-p7 FreeBSD 13.2-RELEASE-p7 stable/23.7-n254871-d5ec322cffc SMP amd64
  • shutdown and start via hardware buttons

The overall goal was to get to 24.1.

Do you have any approaches to troubleshoot this? The box is 180 kilometers away, so local access should be the last resort.
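(Since SNMP and the tunnels still work, some information can be gathered remotely before driving out. A rough sketch of low-risk checks from a machine that reaches the box through the VPN; the address is a placeholder, and whether the SNMP service exposes the host-resources tables depends on its configuration:)

  # does the web GUI answer at all, or does the TCP connection already fail?
  curl -vk https://192.0.2.1/
  # verbose SSH output shows whether it fails at the TCP, key-exchange or auth stage
  ssh -vvv root@192.0.2.1
  # process list via the working SNMP service, e.g. to see whether the web server/PHP are running
  snmpwalk -v2c -c public 192.0.2.1 HOST-RESOURCES-MIB::hrSWRunName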
#2
Status update:

Indeed, with the latest revision (24.1.4, 31cd002eb) the DHCP relay through VPN tunnels no longer throws an error, but it still does not work. I assume it sends the requests with the virtual tunnel IP as source (I couldn't troubleshoot this branch office any further). The current workaround is a local DHCP server. I'm doing more tests at the next branch office.

In the long run, I'm looking forward to 24.7 with the new implementation. For pre-production tests, I have to change the software channel to type=Development in the WebGUI, right?
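(For anyone else reading along: besides the type switch under System > Firmware > Settings, the release type can apparently also be changed from the console; a hedged sketch, to be double-checked against the current docs:)

  # switch the release type to the development packages
  opnsense-update -t opnsense-devel
  # and back to the production type
  opnsense-update -t opnsense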

Thanks team for your excellent work.
#3
Quote from: franco on March 20, 2024, 10:20:44 PM
Though what I don't understand is why you would select "ipsec1" for the relay. You want to relay a physical network to a server address routed somewhere, but not add a tunnel which doesn't have any directly attached clients?

To be more precise, I do not choose ipsec1; OPNsense does. I choose "LAN" to listen for DHCP broadcasts, but a route lookup to the DHCP server apparently yields ipsec1 as the outgoing interface. ipsec1, by the way, is a route-based ESP tunnel interface. Maybe I can try changing it to a policy-based IPsec tunnel tomorrow; the route lookup would then yield the WAN interface, which should be fine for the relay. But I assume the lookup is only used to pick the source IP, which in my case should not be the WAN IP but a local one.
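(To see which interface and addresses are involved, a quick check on the OPNsense shell; the DHCP server address below is a placeholder:)

  # FreeBSD route lookup: shows the outgoing interface towards the DHCP server
  route -n get 10.99.0.10
  # the addresses assigned to the route-based tunnel interface
  ifconfig ipsec1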

In general, it is a common requirement for small offices to catch DHCP requests on the LAN side and forward them through a VPN tunnel to a central DHCP server.

In the worst case I'll have to install Raspberry Pis in the local networks running an ISC DHCP relay to forward requests through the tunnel (a rough sketch of that fallback follows below).
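(A minimal sketch of that fallback on a Debian-based Raspberry Pi using the isc-dhcp-relay package; addresses and interface name are placeholders:)

  # /etc/default/isc-dhcp-relay
  SERVERS="10.99.0.10"      # central DHCP server reached through the tunnel
  INTERFACES="eth0"         # LAN interface to listen on for client broadcasts
  OPTIONS=""                # extra dhcrelay options, none needed here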
#4
Hey there,

We have to roll out a handful of branch offices. There is no IT infrastructure on site, clients only. The wish is to manage the clients from the DHCP server in the data center. I keep trying, but every time I set a DHCP relay IP that is routed via VPN, it throws: "Unsupported device type 131 for "ipsec1"". If I use a DHCP server on the internet or a local one, it works. I cannot see the purpose of a device-type check for a DHCP relay...  ???

The new Kea DHCP alternative lacks a relay option, or I simply cannot find it.

Do you have any idea?
#5
Actually, it was @pmhausen :-)

But after you clarified your request, I would also recommend adding a rule with the desired SRC/DEST/PORT combination, enabling logging, and unticking "Apply the action immediately on match."

That way the rule still matches and is logged, but it does not have the final say on the packet, which is also processed by the following rules.
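(Roughly what that option corresponds to in pf terms; simplified, not the exact rules OPNsense generates, and interface/addresses are placeholders:)

  # with "quick": evaluation stops at the first matching rule
  pass in log quick on em0 proto tcp from 192.168.1.0/24 to any port 443
  # without "quick": evaluation continues, so a later matching rule can still override the action (last match wins)
  pass in log on em0 proto tcp from 192.168.1.0/24 to any port 443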
#6
Additional question: might it help in this case to change the tunnel to a route-based tunnel instead of a policy-based one?
#7
Hi,

I'm not sure I understand you correctly. If the question is whether specific rules get a marker or flag in the log file to filter on, the answer is no.

But every rule has its own rule ID (see the "rid" column; here it looks like "02f4bab03..."). This ID is also transmitted to a syslog server.
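(On the syslog server that rule ID can already serve as a filter key; a trivial example, assuming the firewall's filter log lands in a dedicated file there - the path and the shortened ID are placeholders:)

  grep '02f4bab03' /var/log/remote/opnsense-filter.log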

But yes, it would be great if the description/label were sent along in the syslog/plain view as well.

Addendum:
You can label your rules (the field is called "Description" on the rule's edit page). Put in "**SUSPECT** some other description" and filter the live view with "label contains **SUSPECT**".

Hope that helps
#8
We usually configure a loopback management interface with a management IP for remote sites. This IP is used for HTTPS, SSH and SNMP from the main site to the remote device, and that works well through the VPN tunnel.

However, this does not work for the reverse direction. The use case: the OPNsense initiates a connection to the syslog server. The expectation: the OPNsense box uses the MGMT interface as the source. In reality, the WAN IP is used.
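(One generic way to force this, sketched in pf terms as an outbound NAT rule that rewrites the firewall's own syslog traffic to the MGMT address; addresses and interface are placeholders, and this is not necessarily the "proper" OPNsense way:)

  # Firewall > NAT > Outbound equivalent: source-NAT the box's own syslog
  # traffic towards the central server to the loopback MGMT IP
  nat on ipsec1 inet proto udp from self to 10.0.0.50 port 514 -> 10.255.255.1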

I've attached an image to make this clearer. How can I achieve this goal?
#9
22.7 Legacy Series / Re: IPSec tunnels do not re-initiate
November 17, 2022, 08:36:51 PM
Quote from: mimugmail on November 12, 2022, 07:42:55 AM
Then set the type on start immediate on one site in addition to this

Unfortunately that didn't work either.

But there is news: today we installed version 22.7.7 on a remote OPNsense, which also required rebooting the remote firewall.
Since the update, the tunnel has been running as desired. We have also rolled back our config changes bit by bit to try to destabilize the tunnel again and find the cause - nothing. Currently it runs with the default IKEv2 settings, but with DPD. I can delete the SAs and stop/start the daemons, and the tunnels are recreated immediately (or after the DPD timeout).
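(For reference, "delete the SAs" roughly means the following on the shell; the connection name is a placeholder, and which command set applies depends on whether the box still uses the legacy stroke interface or swanctl:)

  # legacy strongSwan interface (ipsec.conf era)
  ipsec statusall            # show current SAs
  ipsec down con1            # tear the connection down; it now comes back on its own
  # swanctl/vici equivalent on newer setups
  swanctl --list-sas
  swanctl --terminate --ike con1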

Theory 1: the software update to 22.7.7 did something
Theory 2: certain IPSec config changes only become active after a reboot

Next weekend I'll try to update a machine with an untouched config to 22.7.7; maybe that will give me a clue as to whether it is related to the software version.
#10
22.7 Legacy Series / Re: IPSec tunnels do not re-initiate
November 11, 2022, 02:37:56 PM
Quote from: pmhausen on November 11, 2022, 04:22:39 AM
What is your close action set to?

By default it is set to None, but we have tried every option for this parameter - no luck so far.
#11
22.7 Legacy Series / Re: IPSec tunnels do not re-initiate
November 10, 2022, 11:24:07 PM
Quote from: anicoletti on November 10, 2022, 07:07:43 PM
We also had issues with the IPSEC Tunnels not re-establishing even with DPD setup. We ended up using Monit to monitor the IPSEC tunnels and restart if the tunnel ping failed.

Good workaround. Unfortunately, we have no host on the far end to monitor the tunnel and reset it. And to be honest, there should be an out-of-the-box solution based on the OPNsense IPsec configuration alone. There can always be a line interruption, a firmware upgrade, you name it, that results in a tunnel termination. There has to be a solution based on strongSwan.
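(For completeness, the Monit workaround anicoletti describes would look roughly like this; Monit is available under Services > Monit in OPNsense. The probe address is a placeholder on the far side of the tunnel, and /usr/local/bin/restart-ipsec.sh is a hypothetical wrapper script that would, for example, run "ipsec down"/"ipsec up" for the affected connection:)

  check host tunnel-probe with address 10.20.0.1
      if failed ping then exec "/usr/local/bin/restart-ipsec.sh"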
#12
22.7 Legacy Series / Re: IPSec tunnels do not re-initiate
November 10, 2022, 11:19:23 PM
Quote from: mimugmail on November 10, 2022, 08:33:16 PM
Set keyingtries to -1 does the trick

It sounded promising, but neither setting it on the far end nor on both ends brought the tunnel back. Only hitting the start button on the "Status Overview" page did.
#13
22.7 Legacy Series / IPSec tunnels do not re-initiate
November 10, 2022, 03:44:02 PM
Situation:
main location: DNS name pointing to its IP, OPNsense 22.7
remote location 1: dynamic IP, Juniper SRX
remote locations 2 & 3: dynamic IP, OPNsense 22.7

If we restart the strongSwan service at the main location or reboot the OPNsense (because of updates, etc.), the remote OPNsenses do not re-establish their IPsec connections; the SRX location does.

Looking at the remote machines: under VPN > IPsec > Status Overview there is a red cross in the P1 status. The tunnel only comes back immediately after clicking the play button (on the right side) or after a reboot.
We have tested different connection-method configs and DPD settings, but without the desired success.

What does the IPsec config have to look like so that a connection is automatically re-attempted after the tunnel has been aborted/terminated? Any ideas (we have temporary access to the remote machines even when the tunnels are down)?
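(For context, these are the strongSwan knobs that govern re-initiation, expressed in legacy ipsec.conf terms, which the 22.x GUI generates; the connection name and values are illustrative, not a verified config:)

  conn con1
      auto=start            # initiate on daemon start instead of waiting for traffic
      keyingtries=%forever  # keep retrying instead of giving up after a few attempts
      dpddelay=10s          # dead peer detection probe interval
      dpdaction=restart     # re-initiate when the peer stops answering
      closeaction=restart   # re-initiate if the peer closes the SA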
#14
Hardware and Performance / Re: OpnSense on WatchGuard
September 15, 2022, 03:08:27 PM
as promised:


sysadmin@server1:~$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 10.10.26.10 port 5001 connected with 10.10.26.2 port 35810
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0109 sec  1.04 GBytes   892 Mbits/sec
[  2] local 10.10.26.10 port 5001 connected with 10.10.26.2 port 14193
[ ID] Interval       Transfer     Bandwidth
[  2] 0.0000-10.0022 sec  1005 MBytes   843 Mbits/sec
^CWaiting for server threads to complete. Interrupt again to force quit.
^Csysadmin@server1:~$ iperf -s -w1K
WARNING: TCP window size set to 1024 bytes. A small window size
will give poor performance. See the Iperf documentation.
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 2.25 KByte (WARNING: requested 1.00 KByte)
------------------------------------------------------------
[  1] local 10.10.26.10 port 5001 connected with 10.10.26.2 port 24734
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0310 sec  28.0 MBytes  23.4 Mbits/sec

^Csysadmin@server1:~$ iperf -s -w64K
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (WARNING: requested 64.0 KByte)
------------------------------------------------------------
^Csysadmin@server1:~$ iperf -s -w400K
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  416 KByte (WARNING: requested  400 KByte)
------------------------------------------------------------
[  1] local 10.10.26.10 port 5001 connected with 10.10.26.2 port 36810
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0047 sec   946 MBytes   793 Mbits/sec
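(The client side of these runs is simply iperf in client mode against server1; approximate invocation, since only the server output was kept:)

  iperf -c 10.10.26.10 -t 10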


Copy of 3 GB file with random data via SSH:

sysadmin@perfbox:~$ scp large_file sysadmin@server1:~
sysadmin@server1's password:
large_file                                                                                                             100% 3000MB  36.8MB/s   01:21   
sysadmin@perfbox:~$ scp large_file sysadmin@server1:/dev/null
sysadmin@server1's password:
large_file                                                                                                             100% 3000MB  36.4MB/s   01:22   


Test setup: perfbox is a small portable box, server1 is a virtual machine. There is only one hop between them (the OPNsense, of course). The OPNsense itself is quite vanilla; no IPS or similar services are running.

edit:
Another test with IPS enabled:
Throughput drops to 660-750 Mbits/sec (top output is attached).
/edit

============
Overall rating:
- the LCD plugin works great
- the fans are noisy
- the interface assignment is odd (the outside labels do not match the emX devices)
- shutdown is not possible; the box reboots instead
- the serial console stops working once the OPNsense kernel boots (the BIOS output is visible)
#15
Hardware and Performance / Re: OpnSense on WatchGuard
September 06, 2022, 08:39:14 PM
Quote from: Neloas on September 05, 2022, 02:29:44 PM
Have you done any throughput testing on this? This really has me interested especially if it can handle 1Gbps symmetrical

Once we replace the single gateway with the cluster, I'll run an iperf test and share the results.