Messages - schnipp

#1
No, powerd wasn't the problem. I tried many configurations without success. In retrospect, it looks to me like a timing issue in the FreeBSD kernel, which is probably causing problems with the default ACPI configuration of the UEFI firmware.
#2
All good things come to those who wait.

Half a year ago, I conducted further tests with Opnsense, PFsense, FreeBSD (vanilla), and Linux. Both Opnsense and PFsense had the same data throughput issue, while FreeBSD and Linux did not. I couldn't really determine the cause. As a last resort, I've now turned my attention to the UEFI firmware configuration.

Bingo! In the UEFI firmware of the "Supermicro A2SDi-4C-HLN4F" motherboard, there is an option called "ACPI 3.0 T-States." Disabling this option restores almost full throughput. I couldn't find any information about current versions of FreeBSD supporting ACPI 3.0 T-States. Since the issue didn't occur with the vanilla version of FreeBSD, but did occur with both open-source firewalls, it's difficult to prove whether this option is the cause or just a symptom.

So if anyone has similar problems, they should keep an eye on this or similar power management options.


Edit: The above option was already present and enabled in the first BIOS version when I purchased the motherboard in 2017. Unless there were any changes to the ACPI tables during that time, the bug may have been introduced or triggered with a later version of FreeBSD.
#3
24.7, 24.10 Series / Re: Squid: segmentation fault
September 03, 2025, 05:36:06 PM
Quote from: franco on September 01, 2025, 09:14:09 AMThe silence in the GitHub plugin repo regarding the issue disagrees with your blanket statement, but I'm not here to challenge you on a local issue that may persist.

The reason for this is likely that the GitHub issue doesn't reflect the observations I started the thread with. While the GitHub entry discusses an unclean shutdown, this thread is about startup problems with the proxy. The root cause may be the same. Since the problem I observed wasn't always reproducible, root cause analysis wasn't easy either. The current observation (checked recently) is that the Squid proxy sporadically crashes with a segfault, but restarts automatically. The previously mentioned command "squid -k parse" also ends with a segfault.


Quote from: franco on September 01, 2025, 09:14:09 AMPersonally, I don't like the fact that people come and complain about issues but once they are gone do not bother to give useful feedback. It is what it is, though.

Calm down. With posts like these, the community forum is sure to be successful in the long run. There can be countless reasons why someone stops responding to a thread; in the past, at least one of them was that email notifications didn't work reliably.
#4
Quote from: mnaim on July 31, 2025, 11:58:48 AMRight - https://github.com/opnsense/core/issues/9021

I was surprised that the GitHub issue was closed. I can't tell from the ticket content that the bug has been fixed. Does anyone have any further information?
#5
24.7, 24.10 Series / Re: Squid: segmentation fault
August 31, 2025, 11:13:40 AM
No, the "segmentation fault" issue still persists in Opnsense 25.7.1. When Squid crashes, it is automatically restarted, so despite the log entries I no longer observe any interruptions of the Squid service.
#6
Quote from: OPNenthu on August 11, 2025, 08:28:53 AM[...]
I'd also like to change the MAC on all internal parent interfaces as a security measure against applications leaking firewall OUI / manufacturer info to the outside.
[...]

Why do you think changing the MAC address improves security? Of course, the original MAC address reveals the manufacturer of the network card or motherboard, but only to devices on the same Layer 2 network segment. MAC addresses are not transmitted across routers.

For a long time, I used a custom, locally managed MAC address for the WAN interface to avoid problems with my ISP if my hardware changed. This configuration worked perfectly for many years and with several different ISPs, until a few weeks ago.

What happened? My ISP, Deutsche Glasfaser, was conducting maintenance work in our area. During this exact time, the internet connection went down overnight. Absolutely no communication was possible; the remote end simply didn't respond to any of the sent Ethernet frames. It was a tedious process to resolve the issue, especially because the non-technical support provided conflicting and incorrect information :-(.

After a thorough technical investigation, I discovered that the ISP was using similar MAC addresses for its own infrastructure after completing the maintenance. Apparently, the ISP had allocated a MAC address range that also included my custom address, and applied an inbound filter on the CPE to prevent MAC address collisions. After updating the MAC address of the WAN interface, everything worked again.
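For anyone using the same approach, the property my custom address relied on is encoded in the MAC itself: the U/L bit of the first octet distinguishes locally administered addresses from vendor-assigned ones (which carry the manufacturer OUI visible only on the local L2 segment). A minimal sketch; the helper and the example addresses are illustrative, not part of any OPNsense tooling:

```python
def is_locally_administered(mac: str) -> bool:
    """Check the U/L bit (bit 1 of the first octet): if set, the
    address is locally administered and carries no vendor OUI."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

print(is_locally_administered("02:00:00:aa:bb:cc"))  # True  (custom address)
print(is_locally_administered("ac:1f:6b:12:34:56"))  # False (vendor-assigned OUI)
```

A conflict like the one above can only happen when the ISP's own equipment lands in the same locally administered range, which is exactly why such ranges carry no uniqueness guarantee.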
#7
I just updated from Opnsense version 25.7 to 25.7.1 and noticed that the IPsec VPN on my smartphone is no longer working. The IPsec widget in the dashboard shows "no phase 1 configured", while everything looks fine in the configuration section.

On the console, the "swanctl --list-conns" command confirms that no connections are configured. It seems as if Opnsense is forgetting to derive a correct IPsec configuration from its global configuration file.

After switching back to the previous Opnsense version (v.25.7), everything is working fine again. Has anyone observed a similar issue?
#8
In my eyes it's a bug, since Wireguard does not need a tunnel endpoint address. My Wireguard configuration does not have any tunnel endpoint configured, and in the past this worked flawlessly.
#9
During my vacation, I used my roadwarrior Wireguard VPN (Android smartphone -> Opnsense) again after a long period of inactivity. I noticed that IPv6 network traffic wasn't being routed to the internet via the tunnel. IPv4 network traffic, however, worked without any issues. I hadn't changed the configuration in ages. There were only a few Opnsense updates.

I'm currently using Opnsense version 25.7. I tried to reproduce the problem with older Opnsense and older kernel versions. The problem with IPv6 network traffic also exists in the oldest ZFS snapshot (v. 25.1.9_2). So the error must have crept in with an earlier Opnsense update, since I hadn't changed the configuration, and IPv6 in the tunnel had definitely worked in the past.

Further analysis gave the following results:
  • A packet capture of the Wireguard virtual interface showed that the IPv6 network traffic from the smartphone arrived correctly at the Wireguard virtual interface, meaning it passed through the tunnel correctly.
  • However, the firewall logs did not indicate that IPv6 packets were arriving. Therefore, the problem appeared to be that the IPv6 packets were not reaching the firewall.


Further analysis on the console provided more insight:

wg2: flags=10080c1<UP,RUNNING,NOARP,MULTICAST,LOWER_UP> metric 0 mtu 1420
description: My_wireguard_interface (opt15)
options=80000<LINKSTATE>
groups: wg wireguard
nd6 options=9<PERFORMNUD,IFDISABLED>

According to "ifconfig", IPv6 was disabled on the Wireguard interface. Consequently, the packets were filtered on the interface before being forwarded to the firewall. Manually removing the "IFDISABLED" flag in the "nd6 options" or manually assigning an IPv6 address to the interface (which automatically removes the flag) re-enables IPv6, and the network traffic is correctly routed to the internet again via the Wireguard tunnel.

This appears to be a bug in Opnsense, causing the interface to be incorrectly configured. If Wireguard is configured with IPv6, the flag must not be set. Accordingly, Opnsense must remove the flag from the interface if at least one of the following conditions is met:
  • In the Wireguard instance configuration, the "Tunnel address" field contains at least one IPv6 address.
  • The "Allowed IPs" field of at least one associated peer contains at least one IPv6 address.

If only the second condition is met (as in my case), the flag remains set and IPv6 traffic is filtered. In my eyes, this needs to be corrected in future updates.
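The two conditions above boil down to a simple check. This is an illustrative sketch of the decision only (the function name and inputs are my own, not OPNsense code):

```python
import ipaddress

def needs_ipv6(tunnel_addresses, peer_allowed_ips):
    """Return True if IPv6 must be enabled on the WireGuard interface,
    i.e. the nd6 IFDISABLED flag has to be cleared."""
    def has_v6(cidrs):
        # strict=False tolerates host addresses such as 10.10.0.1/24
        return any(ipaddress.ip_network(c, strict=False).version == 6
                   for c in cidrs)
    return has_v6(tunnel_addresses) or has_v6(peer_allowed_ips)

# Condition 2 only (my case): IPv6 appears solely in a peer's Allowed IPs
print(needs_ipv6(["10.10.0.1/24"], ["10.10.0.2/32", "fd00:10::2/128"]))  # True
```

With this logic, an IPv6 entry in either field would be enough to clear the flag, which is the behavior I would expect from the interface configuration.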
#10
If you use a dedicated USB stick and copy the config to the first FAT32 partition, you should not run into trouble (for details, see here). If you like, you can test this by starting the live system in VirtualBox in your desktop environment. Be sure to always have a recovery strategy so you can revert to the old installation in case of unforeseen issues.
#11
Quote from: Patrick M. Hausen on March 09, 2025, 10:44:14 AM[...]
So we are back to a mystery.

I got Gigabit throughput on that same board with OPNsense virtualised in bhyve and two network interfaces passed through. Couple of months ago was last I checked.

I have no idea what else I can do. I can test the following scenarios with reference to the thread mentioned (link). After that, I'll probably have to contact the FreeBSD community.

  • Create an SPD entry for IPv6 instead of IPv4 and measure the throughput on the LAN
  • Upgrade the server to IPv6 and compare the data throughput between IPv4 and IPv6

What I don't understand, however, is why others don't seem to have these problems. The board seems to be used frequently.
#12
Quote from: Patrick M. Hausen on March 08, 2025, 08:19:58 PMI read the last post as installation on a Supermicro A2SDi board without a hypervisor.

I know the board can easily achieve gigabit speeds when routing.

Yes, it does. Running a Linux live system with IP routing, a maximum throughput of around 110 MB/s is reached.
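As a sanity check on that figure, a back-of-the-envelope calculation (assuming a standard 1500-byte MTU and TCP with the timestamps option; this is an estimate, not a measurement) shows that ~110 MB/s is essentially line rate for a gigabit link:

```python
# Rough TCP goodput estimate for a gigabit Ethernet link
link_rate = 1_000_000_000 / 8      # bytes/s on the wire
mtu = 1500
l3_l4_overhead = 20 + 20 + 12      # IPv4 + TCP + timestamps option
frame_overhead = 14 + 4 + 8 + 12   # Ethernet hdr + FCS + preamble + inter-frame gap

payload = mtu - l3_l4_overhead             # 1448 bytes of application data
per_frame = mtu + frame_overhead           # 1538 bytes consumed per frame
goodput = link_rate * payload / per_frame
print(f"{goodput / 1e6:.0f} MB/s")         # ≈ 118 MB/s theoretical maximum
```

So roughly 110 MB/s over SMB is about as good as gigabit gets, and the 50-80 MB/s seen with Opnsense clearly falls short of it.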

Quote from: Patrick M. Hausen on March 08, 2025, 08:19:58 PMSo the question is: which services are you running apart from routing, pf and possibly NAT?
[...]

My Opnsense is running mostly the standard services, extended with

  • Nut
  • Squid Forward Proxy (not involved in performance degradation between client and server)
  • UDP Broadcast Relay

Shutting down non-essential services and kernel modules increases performance, but does not bring back maximum throughput. It looks like the problem is still the known old bug (see here).

However, there is one difference between the old installation (v.24.7.12-2) and the new one (v.25.1.2): when deleting all entries in the SPD (IPsec) and shutting down the Netflow aggregator, the maximum throughput came back to about 100MB/s. In the new installation, the throughput only increases from 50MB/s to about 70-75MB/s.

When I boot the Opnsense live system (v.25.1.2), do the minimal network interface configuration (server: native ethernet interface ix3; client: VLAN on ethernet interface ix2) and create a firewall rule to allow SMB connections from the client to the server, the throughput is about 110MB/s. As soon as I create an additional IPsec rule, the throughput drops to about 80MB/s.

I still don't know how to figure this out.
#13
Quote from: meyergru on March 08, 2025, 06:51:33 PMAny particular reason why you use Virtualbox?
[...]

If it is only being used for evaluation, then fine.

Sorry, I didn't express myself clearly in the first post. My Opnsense runs bare metal:

  • Board: Supermicro A2SDi-4C-HLN4F
  • Memory: 8GB
  • Storage: 120GB SSD

Virtualbox was just the environment to test the migration:

  • Checking SSD backup for recoverability, in case something goes wrong
  • Installation together with configuration restoration

During the test installation in Virtualbox, it turned out that the configuration import does not work properly if I place the configuration on an additional partition of the installation media. As a result, I was able to adapt the installation procedure to reduce the downtime to a minimum.
#14
Today, I migrated my Opnsense from version 24.7.12-2 (UFS) to 25.1.2 (ZFS). It is a completely new installation on the previously wiped SSD. The installation went smoothly. The few manual steps before starting the installation were importing the previously saved configuration and a few additional configuration files.

Board: Supermicro A2SDi-4C-HLN4F
RAM: 8GB

Advantages:
  • Installation went smoothly
  • System starts much faster than the old system

Disadvantages:
  • Still poor data transfer rate between different subnets/VLANs, around 50-80 MB/s over a gigabit connection 😞

#15
Based on the topic segmentation fault, I plan to do a clean installation with automatic import of the config file. I want to simulate everything in VirtualBox beforehand so that the real installation goes smoothly:

1. I tried to boot the Opnsense image directly in VirtualBox, but the image seems to be incompatible; it looks like a general limitation of VirtualBox, which does not support all scenarios and image formats. Instead, I created a USB stick with the image for booting the VM.

2. I created an additional FAT32 partition on the USB stick (GPT type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7). Then I copied the latest unencrypted configuration backup to /conf/config.xml.

3. When using the configuration importer during installation, it is not possible to import the file. Neither the device "da0" nor the partition "da0p5" is accepted. Mounting the partition manually in the Opnsense shell works. Does anybody know the reason, or what kind of devices the importer accepts?
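As an aside on step 2: the GPT type GUID used there is the well-known generic "Microsoft basic data" type, which FAT32 data partitions are normally tagged with. A tiny sketch (the helper name is my own) for comparing partition type GUIDs case-insensitively with Python's uuid module:

```python
import uuid

# GPT partition type GUID for "Microsoft basic data" (from step 2 above)
BASIC_DATA = uuid.UUID("EBD0A0A2-B9E5-4433-87C0-68B6B72699C7")

def is_basic_data(type_guid: str) -> bool:
    # uuid.UUID normalizes case and formatting before comparison
    return uuid.UUID(type_guid) == BASIC_DATA

print(is_basic_data("ebd0a0a2-b9e5-4433-87c0-68b6b72699c7"))  # True
```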



Edit:
=====

  • It looks like the importer unexpectedly stops if it finds a swap partition. Can anyone confirm?
  • My workaround: I manually copied the latest config to the backup folder and restored it as a backup within the live system. Afterwards, I started the installer. This works.



Edit 2:
=======

  • I have a further question: if I restore a config before all relevant plugins are installed and then install the plugins afterwards, are the plugins' configuration parts applied automatically, or are they lost?