Topics - GaardenZwerch

#1
Hi,
I'm looking for a network-accessible device that I can hook the USB consoles of different firewalls up to.

  • rack mountable
  • at least 6 USB ports
  • RJ45

Any suggestions?
Thanks a lot
#2
Hi,
I have just tried to recreate route-based IPsec tunnels with the new configuration interface.
Everything seems to work, but for the VTI I have to enter an IP address in the 'Local address' field. How should I handle this when my local IP is dynamic?
In the 'General Settings' of the connection it is possible to leave this field empty.
Thanks and best regards,
Frank
#3
Hi All,
I have tried to set up 'read-only' access to the web GUI, with the intention of allowing a given user to look at the config, but not mess with it.
I find that if I give a user access to the GUI pages 'without edit' for rules and NAT, he can still reorder the rules.
He can't edit Aliases or rules, but he can still select a rule, and move it around with the <- icon.
Is this expected/known/wanted?
Thanks a lot in advance,
Frank
#4
Hi,
every time I create a static mapping in DHCP, I find myself creating a host alias for that IP immediately afterwards, because I will create one or more firewall rules using this IP.
It might be doable to add a 'create host alias' checkbox that creates an associated host alias when adding a static mapping.
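Until something like that exists, the interim workaround I use is one extra API call right after saving the mapping (just a sketch: the host, API key/secret, alias name and the exact JSON payload are placeholders/assumptions on my side):

# Add the freshly mapped IP to an existing host alias via the API
# (host, credentials, alias name and payload format are placeholders):
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
  -H "Content-Type: application/json" \
  -d '{"address": "192.168.1.50"}' \
  "https://$FW/api/firewall/alias_util/add/my_host_alias"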
Just an idea...
Thanks,
Frank
#5
General Discussion / nrpe check ipsec certificates
June 20, 2022, 04:47:51 PM
Hi,
I would like to include an nrpe check to warn me before certificates in /usr/local/etc/ipsec.d/certs expire.
However, those files are not readable by the nagios user, and a sudoers entry along the lines of
CHMODIPSECCERTS = /bin/chmod a+r /usr/local/etc/ipsec.d/certs/*
is not working (and not desirable). Any other ideas how I could do this?
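The direction I'm leaning towards is a small root-owned wrapper script that only reads the expiry dates, allowed via a narrow sudoers entry (just a sketch, assuming openssl is available; the script path and sudoers line are mine to adapt):

#!/bin/sh
# check_ipsec_certs.sh - run via sudo by the nagios user, e.g. with a sudoers
# entry like: nagios ALL=(root) NOPASSWD: /usr/local/libexec/check_ipsec_certs.sh
# Warn if any certificate expires within the next 30 days.
WARN_SECONDS=$((30 * 24 * 3600))
status=0
for cert in /usr/local/etc/ipsec.d/certs/*; do
  if ! openssl x509 -in "$cert" -noout -checkend "$WARN_SECONDS" >/dev/null; then
    echo "WARNING: $cert expires within 30 days"
    status=1
  fi
done
exit $status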
Thanks a lot
Frank
#6
Hi,
I have a weird behaviour somehow related to source NAT and route-based IPsec tunnels:

Networks A and B are behind an OPNsense box (22.1) and should access resources through a tunnel.

Network B should be NATted as Network A for this. The NAT itself works.

  • I can see the packets leaving through ipsec<X>
  • I can see that the source has been correctly replaced with an address from Network A
  • Packets really originating from Network A reach the other side
  • when I try to generate traffic on the firewall itself (*), I get sendto: Permission denied errors
  • when I temporarily disable pf (pfctl -d), packets reach the other side
  • when I remove the outgoing NAT rule, packets reach the other side, but with the undesired source address

I can't see anything related in pflog, even if I enable logging in the 'permit' rule.

How do I figure out what causes the 'permission denied'? IDS/IPS is disabled.
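For reference, this is how I'm trying to trace it so far (generic pf tooling, nothing OPNsense-specific):

# Watch pflog with link-level headers; the pflog header shows the rule
# number and action that produced each verdict:
tcpdump -nei pflog0

# Dump the loaded filter and NAT rules with rule numbers and counters,
# to match against what shows up on pflog0:
pfctl -vvsr
pfctl -vvsn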

Thanks a lot,
Frank

(*) either using ping -S Network-A-Address, or using nc -vz -s
#7
22.1 Legacy Series / [Solved] What can replace clog?
January 31, 2022, 02:31:05 PM
Hi all,
I wonder what I have at my disposal now that clog is gone? I rely on clog when debugging things, and I use it in nrpe monitoring scripts as well.
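For my own notes, the direction that seems to work is plain text tools on the new flat log files (a sketch; the exact paths and the latest.log symlink are assumptions on my part, adjust to your install):

# The circular logs are gone, so standard tools work directly on the files
# under /var/log/<app>/ (paths below are assumptions):
tail -f /var/log/filter/latest.log      # follow the firewall log live
grep -i block /var/log/filter/*.log     # search across rotated files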
Thanks in advance,
Frank

#8
Hi,
I upgraded a test system to 21.7.3 and found that routes are not set for an OpenVPN client connection initiated from the OPNsense appliance.
I can tcpdump on the ovpnc-n interface and see incoming traffic.
When I add the required routes (as specified in "IPv4 Remote Network") manually, traffic starts flowing correctly.
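For anyone hitting the same thing, the manual workaround looks like this (the network and interface name are placeholders for my actual values):

# Add the missing "IPv4 Remote Network" route by hand, pointing it at the
# OpenVPN client interface (placeholders):
route add -net 10.99.0.0/24 -interface ovpnc1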
I have reverted to 21.7.1, and all is well again, using:
opnsense-revert -kr 21.7.1 opnsense
opnsense-update -kr 21.7.1


Thanks and regards,
#9
Hi,
I have route-based IPsec tunnels from my branches to the center, and I have trouble with remote switches doing 802.1X with EAP, as the packets seem to get too large (the switch tries to send 1472 bytes to the RADIUS server) (see attached schema).
I have found that on the central firewall, the (larger) requests seem to arrive on enc0 but are lost somewhere before they are passed to the ipsec<n> interface. Smaller packets go through fine; for example, MAC-based auth on the same switch, against the same RADIUS, succeeds. The packets seem to disappear silently, as I find no ICMP unreachables anywhere that could help PMTUD work.

tcpdump on Central FW's enc0

14:54:46.500486 (authentic,confidential): SPI 0xc9334317: IP 172.27.5.18.1812 > 10.3.137.200.1812: RADIUS, Access-Challenge (11), id: 0x90 length: 1368
14:54:46.500516 (authentic,confidential): SPI 0xc9334317: IP 172.27.5.18 > 10.3.137.200: ip-proto-17
14:54:46.509256 (authentic,confidential): SPI 0xc06ab719: IP 10.3.137.200.1812 > 172.27.5.18.1812: RADIUS, Access-Request (1), id: 0x91 length: 414
14:54:46.511684 (authentic,confidential): SPI 0xc9334317: IP 172.27.5.18.1812 > 10.3.137.200.1812: RADIUS, Access-Challenge (11), id: 0x91 length: 819
14:54:46.535993 (authentic,confidential): SPI 0xc06ab719: IP 10.3.137.200.1812 > 172.27.5.18.1812: RADIUS, Access-Request (1), id: 0x92 length: 1368
14:54:46.536032 (authentic,confidential): SPI 0xc06ab719: IP 10.3.137.200 > 172.27.5.18: ip-proto-17
14:54:46.671825 (authentic,confidential): SPI 0xc06ab719: IP 10.3.137.200.1812 > 172.27.5.18.1812: RADIUS, Access-Request (1), id: 0x93 length: 363
14:54:47.673778 (authentic,confidential): SPI 0xc9334317: IP 172.27.5.18.1812 > 10.3.137.200.1812: RADIUS, Access-Reject (3), id: 0x93 length: 20



tcpdump on the central FW's ipsec<n>; you can see that id 0x92 goes missing:
14:54:46.497772 IP 10.3.137.200.1812 > 172.27.5.18.1812: RADIUS, Access-Request (1), id: 0x90 length: 414
14:54:46.500484 IP 172.27.5.18.1812 > 10.3.137.200.1812: RADIUS, Access-Challenge (11), id: 0x90 length: 1368
14:54:46.500515 IP 172.27.5.18 > 10.3.137.200: ip-proto-17
14:54:46.509260 IP 10.3.137.200.1812 > 172.27.5.18.1812: RADIUS, Access-Request (1), id: 0x91 length: 414
14:54:46.511681 IP 172.27.5.18.1812 > 10.3.137.200.1812: RADIUS, Access-Challenge (11), id: 0x91 length: 819
14:54:46.671829 IP 10.3.137.200.1812 > 172.27.5.18.1812: RADIUS, Access-Request (1), id: 0x93 length: 363
14:54:47.673775 IP 172.27.5.18.1812 > 10.3.137.200.1812: RADIUS, Access-Reject (3), id: 0x93 length: 20


Looking at what goes into the firewall at the switch's side, I see that the original size of 0x90 is 1390 bytes, which gets split and correctly reassembled; 0x92 is 1472 bytes, gets split and is then somehow lost at the 'end' of the tunnel.
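For what it's worth, this is how I'm probing the usable MTU across the tunnel from the central side (plain ping with the DF bit set; -s is the ICMP payload, so add 28 bytes for the IP/ICMP headers):

# 1444-byte payload = 1472-byte IP packet, the size that gets lost:
ping -D -s 1444 10.3.137.200
# A smaller probe that still goes through:
ping -D -s 1300 10.3.137.200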

Any ideas what I could do to get this to work?

Thanks and regards,
Frank
#10
General Discussion / [SOLVED] API flush alias
June 30, 2021, 11:57:48 AM
Hi,
I can successfully use API calls to list the contents of an alias and to flush it,
but after a few seconds the contents get restored 'magically'.

After a flush, the /conf/config.xml doesn't reflect that the alias should be empty, whereas a 'list' API call returns
{"total":0,"rowCount":-1,"current":1,"rows":[]}

Am I doing this wrong?

Thanks in advance,
Frank
#11
Hi,
I have observed NAT not happening on a single connection several times today.
I have "Hybrid outbound NAT rule generation" enabled, but I notice that sometimes packets from a host leave the WAN interface with their private IP as the source address:

11:16:55.750589 IP 10.6.2.176.500 > 1.2.2.4.500: isakmp: parent_sa ikev2_init[I]
11:16:56.753706 IP 10.6.2.176.500 > 1.2.3.4.500: isakmp: parent_sa ikev2_init[I]
11:16:57.756176 IP 10.6.2.176.500 > 1.2.3.4.500: isakmp: parent_sa ikev2_init[I]

At the same time, this client accesses 'the rest' of the Internet just fine, so NAT is happening there.
When I go to the "States Dump" and kill this single state, all is fine and the client can connect.
I suspected maybe a full table, but that doesn't seem to be the case:


root@opnsense-master:~ # pfctl -si
Status: Enabled for 8 days 22:06:38           Debug: Urgent

State Table                          Total             Rate
  current entries                    34377               
  searches                     37178953282        48234.4/s
  inserts                         73241737           95.0/s
  removals                        73207352           95.0/s
Counters
  match                           79776073          103.5/s
  bad-offset                             0            0.0/s
  fragment                            7501            0.0/s
  short                                  2            0.0/s
  normalize                            460            0.0/s
  memory                                 0            0.0/s
  bad-timestamp                          0            0.0/s
  congestion                             0            0.0/s
  ip-option                        1503550            2.0/s
  proto-cksum                            0            0.0/s
  state-mismatch                      6256            0.0/s
  state-insert                          11            0.0/s
  state-limit                            0            0.0/s
  src-limit                              0            0.0/s
  synproxy                               0            0.0/s
  map-failed                             0            0.0/s
root@bo-claurive-master:~ # pfctl -sm
states        hard limit   797000
src-nodes     hard limit   797000
frags         hard limit     5000
table-entries hard limit  1000000
root@opnsense-master:~ #



This is an HA cluster on OPNsense 21.1.5. At the moment I have only seen this happen for IKE packets.
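In case it helps, the shell equivalent of what I do in the GUI when it happens (IPs are the ones from the capture above):

# Kill the states from the internal host to the IKE peer, same effect as
# removing the entry in the States Dump:
pfctl -k 10.6.2.176 -k 1.2.3.4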

Thanks for any hints,

Frank



#12
General Discussion / Concurrent api calls
April 27, 2021, 10:24:31 AM
Hi,

I have just seen, for the first time, a situation where an API call to add an IP to an alias
(https://$fw/api/firewall/alias_util/$action/$fwtable) replies "status":"done"
but the address is not really added to the alias.
The address is added to 13 different aliases in a loop, and each call reported "status":"done", but in reality the address had only been added to 10 of them.
This series of calls is repeated on the two nodes of my HA cluster, and on the slave it worked as expected.

Could it be a problem if concurrent API calls are made? My users log on to a portal page and the portal makes these API calls to give them access to their resources, so this can happen 'simultaneously'.
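For context, the portal's loop is essentially the following, and as a stop-gap I'm considering verifying after each add (a sketch; host, credentials, address and alias names are placeholders, and the payload format is from my own scripts):

address="10.1.2.3"
for fwtable in alias_a alias_b alias_c; do
  # Add the address to the alias:
  curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
    -H "Content-Type: application/json" \
    -d "{\"address\": \"$address\"}" \
    "https://$FW/api/firewall/alias_util/add/$fwtable"
  # Verify it really landed, so I can see whether concurrency is the culprit:
  curl -s -k -u "$API_KEY:$API_SECRET" \
    "https://$FW/api/firewall/alias_util/list/$fwtable" \
    | grep -q "$address" || echo "add to $fwtable did not stick" >&2
done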

Any hints?

Thanks a lot in advance,

Frank
#13
Hi,
I have just found out that OPNsense lets me enter nonsensical destinations in the static routes dialogue.
It let me put

10.204.71.0/16

in the destination field (even overriding my larger 10.204.0.0/16 route)
Maybe a check could prevent dummies like me from sabotaging their network ;-)

Thanks
#14
Hi,
the last upgrade breaks IPSec for us. (update + reboot from 21.1.1 to 21.1.2)

This is all I see in the log. Only a few udp:500 packets are transmitted.

Feb 26 10:18:43 TC-master charon[8392]: 12[KNL] creating acquire job for policy a.b.c.d/32 === x.y.z.t/32 with reqid {109}
Feb 26 10:18:43 TC-master charon[8392]: 05[IKE] <con7|73951> initiating IKE_SA con7[73951] to x.y.z.t
Feb 26 10:18:43 TC-master charon[8392]: 05[NET] <con7|73951> sending packet: from a.b.c.d[500] to x.y.z.t[500] (464 bytes)
Feb 26 10:18:43 TC-master charon[8392]: 05[NET] <con7|73951> received packet: from x.y.z.t[500] to a.b.c.d[500] (36 bytes)
Feb 26 10:19:07 TC-master charon[8392]: 06[KNL] creating acquire job for policy a.b.c.d/32 === x.y.z.t/32 with reqid {109}
Feb 26 10:19:07 TC-master charon[8392]: 11[IKE] <con7|73952> initiating IKE_SA con7[73952] to x.y.z.t
Feb 26 10:19:07 TC-master charon[8392]: 11[NET] <con7|73952> sending packet: from a.b.c.d[500] to x.y.z.t[500] (464 bytes)
Feb 26 10:19:07 TC-master charon[8392]: 11[NET] <con7|73952> received packet: from x.y.z.t[500] to a.b.c.d[500] (36 bytes)
Feb 26 10:19:31 TC-master charon[8392]: 05[KNL] creating acquire job for policy a.b.c.d/32 === x.y.z.t/32 with reqid {109}
Feb 26 10:19:31 TC-master charon[8392]: 14[IKE] <con7|73955> initiating IKE_SA con7[73955] to x.y.z.t



I did
opnsense-revert -r 21.1.1 strongswan
opnsense-update -kr 21.1
and a reboot. That didn't help.
Then, I did
opnsense-revert -r 21.1.1 strongswan
again, and now the connection comes up again.
#15
Hi all,

I have a situation where incoming traffic doesn't seem to be passed to the haproxy process.

  • the backends are fine, I see that haproxy contacts them regularly, and they are 'UP'
  • when I try to contact publicip:port from outside the OPNsense box, I see the request coming in, and I can see it 'pass', looking at pflog. Nothing shows in haproxy.log
  • sockstat shows haproxy is listening at publicip:port
  • when I do 'curl publicip:port' on the OPNsense box itself, everything works, and the request shows in the haproxy.log
  • to keep things simple, I have used 88 as the public port, so that nothing interferes with OPNsense's GUI
  • I have a rule that accepts traffic to publicip:port on the interface where the request comes in
  • publicip is a CARP virtual IP

Any hints on what could be wrong here?
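For reference, the checks I ran from the shell (port 88 is my test port; the interface name and VIP are placeholders):

# Confirm haproxy is really bound on the CARP VIP:
sockstat -4l | grep :88

# Watch the incoming requests on the wire and on pflog:
tcpdump -ni ix0 port 88        # WAN-side interface (placeholder name)
tcpdump -ni pflog0 port 88     # what pf logs for that traffic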

Thanks a lot in advance,
#16
Hi All,

I have a star-shaped network with satellites connecting to the core with IPSec connections.
I need to use Tunnel Isolation, otherwise traffic that I don't want there gets routed through the tunnel.

I have the following weird behavior: whenever I add a new satellite site (i.e. add a new phase 2 plus the corresponding phase 2 entries at the core) and press 'Apply', the existing satellites cannot initiate connections any more.
If I run 'ipsec up conx-y' at the satellite, I get a TS_UNACCEPTABLE error;
if I run 'ipsec up conx-y' at the core, it establishes fine.
However, after I do an 'ipsec restart' at the satellite, the satellite can initiate again.
Any idea what I can do in order to find out what's going on here?
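What I'm comparing on both ends so far, before resorting to a full restart (stroke-based strongSwan commands; the connection name is a placeholder):

# Show the loaded connection and the traffic selectors it ended up with:
ipsec statusall conx-y

# Re-read the connection definitions without tearing everything down,
# to see whether that alone lets the satellite initiate again:
ipsec update
ipsec reload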

Thanks a lot,

Frank

#17
Hi,

I have a weird behavior that looks a lot like what is described here:
https://forum.opnsense.org/index.php?topic=18238.0
but the solution does not apply.
I have a Firewall (in fact a CARP Cluster) that has its default gw on the internet, and is also connected on an internal network to several other gateways.

I have the following behavior, where traffic doesn't follow the routing table, but is directed to one specific gateway on the internal network, because of the following rules:

pass out log route-to (ix1 192.168.0.232) inet from 192.168.0.243 to ! (ix1:network) flags S/SA keep state allow-opts label "f21c75990b75b3a6112de8b7141f1e03"
pass out log route-to (ix1 192.168.0.232) inet from 192.168.0.242 to ! (ix1:network) flags S/SA keep state allow-opts label "f21c75990b75b3a6112de8b7141f1e03"

this overrides what
root@TC-master:~ # route show 172.27.5.3
   route to: 172.27.5.3
destination: 172.27.5.3
       mask: 255.255.255.255
    gateway: 192.168.0.252
suggests, which I understand.

However, I don't understand why these rules are generated, as the interface definition of ix1 does NOT have a gateway; it is set to 'Auto-detect'.
Both gateways are defined in system/gateway/single, and I have defined static routes that use them. The output of netstat is coherent with what I have defined in the GUI.
root@TC-master:~ # netstat -nr4l | grep "default\|^172"
default            1.2.3.4      UGS    155677065   1500        ix0
172.16.0.0/12      192.168.0.252      UGS     4229777   1500        ix1
172.27.6.0/28      192.168.0.232      UGS           0   1500        ix1
172.27.6.16/28     192.168.0.232      UGS       39579   1500        ix1

I have tried adding a /32 route just for testing purposes, but of course the pf 'route-to' rule has precedence.
I have looked in the config.xml file for occurrences of the IP 192.168.0.232 and checked whether the object is referenced somewhere else, but it is not. The IP occurs once, in the definition of the gateway, and is used exactly twice, giving the two wanted static routes.

So, what causes the two 'route-to' rules? Can I use the generated label to further look for this?
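My next step, in case it helps others: track the label back through the generated ruleset (assuming the generated rules still live in /tmp/rules.debug):

# Find where the rule comes from in the generated ruleset:
grep -n "f21c75990b75b3a6112de8b7141f1e03" /tmp/rules.debug

# And confirm which loaded rules carry that label, with counters:
pfctl -vvsr | grep -B2 "f21c75990b75b3a6112de8b7141f1e03"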
Thanks a lot for any hints,
Frank


#18
20.7 Legacy Series / configd_ctl.py eating memory
December 18, 2020, 07:37:32 AM
Hi,

out of my >50 OPNsense boxes there is one that regularly runs out of memory over the course of around two weeks, to the point where someone has to power-cycle it on site.

Yesterday, I got a chance to look at it before it was too late.

top shows me it's a python process:

last pid: 22346;  load averages:  3.76,  2.84,  1.89             up 8+10:04:59  17:52:52
53 processes:  4 running, 49 sleeping
CPU: 30.1% user,  0.0% nice, 12.7% system,  0.2% interrupt, 56.9% idle
Mem: 5423M Active, 62M Inact, 776M Laundry, 1226M Wired, 697M Buf, 220M Free
Swap: 8192M Total, 3754M Used, 4438M Free, 45% Inuse

  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
65714 root          1 103    0  5667M  5136M CPU7     7 100:34  97.62% python3.7
38963 root         17  41    0  1122M    14M sigwai   7   0:02   0.00% charon
84282 root          1  20    0  1050M  4360K select   3   0:27   0.00% sshd


and here are the python processes:

root@lesc:~ # ps aux|grep python
root   65714  99.0 64.4 5803116 5258816  -  R     9Dec20    99:41.55 /usr/local/bin/python3 /usr/local/opnsense/service/configd_ctl.py -e -t 0.5 system event config_chang
root    7298   0.0  0.1   20540   10716  -  S    17:47       0:00.06 /usr/local/bin/python3 /usr/local/opnsense/service/configd_ctl.py -e -t 0.5 system event config_chang
root   13583   0.0  0.3   58004   21716  -  I     9Dec20     0:12.39 /usr/local/bin/python3 /usr/local/opnsense/service/configd.py console (python3.7)
root   20151   0.0  0.0   31876       0  -  IWs  -           0:00.00 /usr/local/bin/python3 /usr/local/opnsense/service/configd.py (python3.7)
root   27623   0.0  0.1   21076   11228  -  S    17:47       0:00.07 /usr/local/bin/python3 /usr/local/opnsense/scripts/syslog/lockout_handler (python3.7)
root   93887   0.0  0.2   37548   16812  -  Ss    9Dec20   825:04.49 /usr/local/bin/python3 /usr/local/opnsense/scripts/netflow/flowd_aggregate.py (python3.7)
root   65791   0.0  0.0 1067288    2932  0  S+   17:51       0:00.00 grep python


So, I'm wondering what "configd_ctl.py -e -t 0.5 system event config_chang" does, why there are two of them, and how I can find out what's going wrong. At the time I got to take a look, the system log was full of 'cannot allocate' entries.

The system has 8 GB RAM + 8 GB swap. No Suricata, no flowd. It is running 20.7.6, but it has shown this with earlier 20.7.x releases.
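What I currently do as a stop-gap when memory starts climbing (assuming a short configd restart is acceptable on a production box; the PID is the one from the top output above):

# Kill the runaway event handler and restart the configd service:
kill 65714
service configd restart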

Thanks for any hints,
#19
High availability / Scheduled Failover
December 10, 2020, 10:20:44 AM
Hi,
I would very much like a way to schedule failovers at night, when my users (and I) should be sleeping.
Searching has brought up the following:
sysctl net.inet.carp.allow=0
to disable carp, and
sysctl net.inet.carp.allow=1
to re-enable it.

It works in my lab setup, but before going any further I'd like your input: is there another, preferred way?
How would you set it up using at?
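What I have in mind with at(1) would be something like this on the current master (a sketch only; whether the cluster fails back afterwards depends on the preempt settings, and at jobs need atrun, which FreeBSD schedules from /etc/crontab by default):

# Disable CARP at 03:00 to force the failover, re-enable it ten minutes later:
echo "sysctl net.inet.carp.allow=0" | at 03:00
echo "sysctl net.inet.carp.allow=1" | at 03:10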

TIA,
Frank
#20
Hi,
I have a two-node HA cluster that operates as a VPN gateway.
It has a setup for mobile clients (roadwarriors), several tunnels to other (fixed) locations, and an OpenVPN server.

I have now experienced several times that IPsec restarts when I modify the config of the OpenVPN server and hit Apply.
This is the syslog. I don't have anything useful in ipsec.log, as it gets overwritten when IPsec restarts.
Is this related to the /usr/local/etc/rc.newwanip entries?
The weirdest thing is that I have built a lab (qemu) with the exact same setup, same OPNsense (20.7.5), same config (apart from interface names), including the cluster, a router and a client, and I can't reproduce it there.
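To finally get something useful out of ipsec.log, I'm now keeping a persistent copy while reproducing (clog because this is 20.7; the log path is the one on my box):

# Follow the circular IPsec log and keep a copy that survives the restart:
clog -f /var/log/ipsec.log | tee /root/ipsec-capture.log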
Thanks a lot for any clues,

Frank


Dec  3 10:24:43 TC-master configctl[71989]: event @ 1606991083.41 msg: Dec  3 10:24:43 TC-master......
config[46965]: config-event: new_config /conf/backup/config-1606991083.412.xml 
Dec  3 10:24:43 TC-master configctl[71989]: event @ 1606991083.41 exec: system event config_changed
Dec  3 10:24:46 TC-master sshd[24400]: Accepted publickey for root from 172.30.0.250 port 51436 ssh2: RSA ....
Dec  3 10:24:46 TC-master sshd[24400]: Received disconnect from 172.30.0.250 port 51436:11: disconnected by user
Dec  3 10:24:46 TC-master sshd[24400]: Disconnected from user root 172.30.0.250 port 51436
Dec  3 10:26:03 TC-master webgui[46965]: /index.php: Session timed out for user 'root' from: 172.27.5.3
Dec  3 10:26:03 TC-master webgui[46965]: /index.php: Session timed out for user 'root' from: 172.27.5.3
Dec  3 10:26:06 TC-master webgui[46965]: /index.php: Successful login for user 'root' from: 172.27.5.3
Dec  3 10:26:06 TC-master webgui[46965]: /index.php: Successful login for user 'root' from: 172.27.5.3
Dec  3 10:27:47 TC-master kernel: ovpns1: link state changed to DOWN
Dec  3 10:27:47 TC-master configctl[71989]: event @ 1606991267.15 msg: Dec  3 10:27:47 TC-master.....
config[46965]: config-event: new_config /conf/backup/config-1606991267.1531.xml 
Dec  3 10:27:47 TC-master configctl[71989]: event @ 1606991267.15 exec: system event config_changed
Dec  3 10:27:49 TC-master kernel: pflog0: promiscuous mode disabled
Dec  3 10:27:49 TC-master kernel: pflog0: promiscuous mode enabled
Dec  3 10:27:49 TC-master kernel: ovpns1: link state changed to UP
Dec  3 10:27:49 TC-master config[46965]: /vpn_openvpn_server.php: OpenVPN server 1 instance started on PID 21866.
Dec  3 10:27:49 TC-master opnsense[71739]: /usr/local/etc/rc.newwanip: IPv4 renewal is starting on 'ovpns1'
Dec  3 10:27:49 TC-master opnsense[71739]: /usr/local/etc/rc.newwanip: Interface '' is disabled or empty, nothing to do.
Dec  3 10:27:50 TC-master kernel: pflog0: promiscuous mode disabled
Dec  3 10:27:50 TC-master kernel: pflog0: promiscuous mode enabled