Based on the topic segmentation fault (https://forum.opnsense.org/index.php?topic=45943.0) I plan to do a clean installation with automatic import of the config file. I want to simulate everything in VirtualBox first, so that the real installation goes smoothly:
1. I tried to boot the Opnsense image directly in VirtualBox, but the image seems to be incompatible; it looks like a general limitation of VirtualBox not supporting all scenarios and image formats. Instead, I created a USB stick with the image for booting the VM.
2. I created an additional FAT32 partition on the USB stick (GPT type: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, "Microsoft basic data"). Then I copied the latest unencrypted configuration backup to /conf/config.xml on that partition.
3. When using the configuration importer during installation, it is not possible to import the file: neither the device "da0" nor the partition "da0p5" is accepted. Mounting the partition manually in the Opnsense shell works, though. Does anybody know the reason, or what kind of devices the importer accepts?
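For reference, the partition preparation and manual mount from steps 2 and 3 can be sketched like this. This is only a sketch of my setup: the device name da0 and the resulting partition da0p5 are examples from my stick; check yours with `gpart show` first.

```shell
# Assumes the memstick image was written to da0 and the new partition
# comes out as da0p5 -- verify with `gpart show da0` before running.
gpart add -t ms-basic-data -s 100M da0   # GPT type EBD0A0A2-... ("Microsoft basic data")
newfs_msdos /dev/da0p5                   # create the FAT32 filesystem
mount_msdosfs /dev/da0p5 /mnt            # manual mounting works; the importer refuses it
mkdir -p /mnt/conf
cp /path/to/config-backup.xml /mnt/conf/config.xml
umount /mnt
```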
Edit:
=====
- It looks like the importer unexpectedly stops when it encounters a swap partition. Can anyone confirm?
- I found the following workaround: I manually copied the latest config to the backup folder, restored that backup within the live system, and then started the installer. This works.
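A hedged sketch of that workaround from the live-system shell (the device name da0p5 and the mount point are assumptions from my setup; /conf/backup is where Opnsense keeps its config history):

```shell
# Mount the FAT32 partition and stage the config as a timestamped backup,
# then restore it via the GUI (System > Configuration > Backups).
mkdir -p /tmp/usbcfg
mount_msdosfs /dev/da0p5 /tmp/usbcfg
cp /tmp/usbcfg/conf/config.xml "/conf/backup/config-$(date +%s).xml"
umount /tmp/usbcfg
```

After the restore is applied in the live system, starting the installer carries the configuration over.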
Edit 2:
=======
- A further question: if I restore a config before all relevant plugins are installed and install the plugins afterwards, are the plugins' configuration parts applied automatically, or are they lost?
Today, I migrated my Opnsense from version 24.7.12-2 (UFS) to 25.1.2 (ZFS). It is a completely new installation on the previously wiped SSD. The installation went smoothly. The only manual steps before starting the installation were importing the previously saved configuration and a few additional configuration files.
Board: Supermicro A2SDi-4C-HLN4F
RAM: 8GB
Advantages:
- Installation went smoothly
- System starts much faster than the old system
Disadvantages:
- Still poor data transfer rate between different subnets/VLANs, around 50-80 MB/s over a gigabit connection 😞
Any particular reason why you use VirtualBox? Generally speaking, emulation of network hardware always tends to be somewhat slow, given the many context switches involved. With KVM-based virtualisation solutions, I know that 1 Gbit/s can be reached easily. VirtualBox itself now offers a KVM-based variant...
If it is only being used for evaluation, then fine.
Quote from: meyergru on March 08, 2025, 06:51:33 PMAny particular reason why you use Virtualbox?
[...]
If it is only being used for evaluation, then fine.
Sorry, I didn't express myself clearly in the first post. My Opnsense runs on bare metal:
- Board: Supermicro A2SDi-4C-HLN4F
- Memory: 8GB
- Storage: 120GB SSD
Virtualbox was just the environment to test the migration:
- Checking SSD backup for recoverability, in case something goes wrong
- Installation together with configuration restoration
During the test installation in VirtualBox, it turned out that the configuration import does not work properly if I place the configuration on an additional partition of the installation medium. As a result, I was able to adapt the installation procedure and reduce the downtime to a minimum.
I read the last post as installation on a Supermicro A2SDi board without a hypervisor.
I know the board can easily achieve gigabit speeds when routing.
So the question is: which services are you running apart from routing, pf and possibly NAT?
Also keep in mind that 1 Gbit/s works out to roughly 100 MB/s of net TCP throughput - so 80 is not that far off. That is, if you meant that capital "B" to mean "bytes". And it might be due to e.g. IPS that you do not reach 100.
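The overhead arithmetic behind that rule of thumb is easy to check. The numbers below assume standard Ethernet (1500-byte MTU, no jumbo frames, IPv4 and TCP without options):

```shell
# Net TCP goodput over 1 Gbit/s Ethernet: each full-size frame carries
# 1460 bytes of TCP payload in 1538 bytes of wire time
# (preamble + inter-frame gap + Ethernet/IP/TCP headers + FCS).
line_rate_Bps=$(( 1000000000 / 8 ))              # 125000000 bytes/s raw line rate
goodput_Bps=$(( line_rate_Bps * 1460 / 1538 ))   # theoretical TCP goodput ceiling
echo "$(( goodput_Bps / 1000000 )) MB/s"         # prints "118 MB/s"
```

Real-world transfers (SMB adds its own protocol overhead on top of TCP) land a bit below that, so the 100-110 MB/s figures seen in practice are about the ceiling.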
Quote from: Patrick M. Hausen on March 08, 2025, 08:19:58 PMI read the last post as installation on a Supermicro A2SDi board without a hypervisor.
I know the board can easily achieve gigabit speeds when routing.
Yes, it does. Running a Linux live system with IP routing, a maximum throughput of around 110 MB/s is reached.
Quote from: Patrick M. Hausen on March 08, 2025, 08:19:58 PMSo the question is: which services are you running apart from routing, pf and possibly NAT?
[...]
My Opnsense is running mostly the standard services, extended with
- Nut
- Squid Forward Proxy (not involved in performance degradation between client and server)
- UDP Broadcast Relay
Shutting down non-essential services and kernel modules increases performance, but does not restore maximum throughput. It looks like the problem is still the known old bug (see here (https://forum.opnsense.org/index.php?topic=37029.0)).
However, there is one difference between the old installation (v24.7.12-2) and the new one (v25.1.2): after deleting all entries in the SPD (IPsec) and shutting down the Netflow aggregator, the old installation's maximum throughput came back to about 100 MB/s. On the new installation, the throughput only increases from 50 MB/s to about 70-75 MB/s.
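For anyone wanting to reproduce this, the IPsec SPD can be inspected and cleared from the shell with setkey(8). Flushing removes all IPsec policies from the kernel, so this is strictly a test measure:

```shell
setkey -DP   # dump the current security policy database (SPD)
setkey -FP   # flush all SPD entries (test only: IPsec tunnels stop matching traffic)
```

Comparing forwarding throughput before and after the flush is the quickest way to tell whether the SPD-related slowdown from the linked thread applies.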
When I boot the Opnsense live system (v25.1.2), do a minimal network interface configuration (server: native Ethernet interface ix3; client: VLAN on Ethernet interface ix2) and create a firewall rule allowing SMB connections from the client to the server, the throughput is about 110 MB/s. As soon as I create an additional IPsec rule, the throughput drops to about 80 MB/s.
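The minimal live-system setup described above looks roughly like this from the shell. The addresses and the VLAN tag are made-up examples; in practice the interfaces were assigned via the console menu/GUI:

```shell
# Server side: native interface ix3
ifconfig ix3 inet 192.168.10.1/24 up
# Client side: tagged VLAN on ix2 (tag 20 is an example)
ifconfig vlan20 create vlandev ix2 vlan 20
ifconfig vlan20 inet 192.168.20.1/24 up
# ...plus a pf rule permitting SMB (TCP 445) from the client net to the server
```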
I still don't know how to figure this out.
Services like NUT that do not sit in the packet forwarding path (unlike IDS/IPS) cannot have an impact on forwarding performance, IMHO. So we are back to a mystery.
I got Gigabit throughput on that same board with OPNsense virtualised in bhyve and two network interfaces passed through. Couple of months ago was last I checked.
Quote from: Patrick M. Hausen on March 09, 2025, 10:44:14 AM[...]
So we are back to a mystery.
I got Gigabit throughput on that same board with OPNsense virtualised in bhyve and two network interfaces passed through. Couple of months ago was last I checked.
I have no idea what else I can do. With reference to the thread mentioned (link (https://forum.opnsense.org/index.php?topic=37029.0)), I can test the following scenarios; after that, I'll probably have to contact the FreeBSD community.
- Create an SPD entry for IPv6 instead of IPv4 and measure the throughput on the LAN
- Upgrade the server to IPv6 and compare the data throughput between IPv4 and IPv6
What I don't understand, however, is why others don't seem to have these problems. The board seems to be widely used.