Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - relume

#1
Thanks again!

Meanwhile, I stepped in a bit deeper. I queried ChatGPT and got a very detailed technical elaboration, and at the same time I also found the following guide: "OPNsense Performance Tuning for Multi-Gigabit Internet". Most arguments in this guide are similar to those in the ChatGPT elaboration. ChatGPT additionally pointed out the aspect of hugepages, both on the Proxmox host and in the VM configuration, but I was unable to set hw.pagesizes=1073741824 in OPNsense, neither as a tunable in the GUI nor at the /boot/loader.conf or loader.conf.local level. However, with those tunings from ChatGPT and particularly from the guide, I was able to raise the overall network speed by a factor of 3 to 4 (depending on how many threads iperf3 tests with; not across the OPNsense instance): from as little as 0.9-1.2 Gbps to 2.5-4.2 Gbps (also confirmed with an Ookla speed test of the Internet connection). Sure, this is still nowhere near the available 10 Gbps (internal and external), but for the moment it is better than the previous situation, even if it remains somewhat disappointing considering the time and effort needed to get the system(s) tuned.

best

P.S. In OPNsense, hw.vtnet.csum_disable=1 is set by default at boot time on a global level. But strangely enough, only setting "Overwrite global settings: yes > Hardware CRC: Disable hardware checksum offload: yes" on every single interface gives a speed increase of 10%.
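For anyone following along, here is roughly the set of tunables I ended up with. This is only a sketch: the values come from the guide and the ChatGPT elaboration and should be adapted to your own CPU and NIC (the RSS tunables in particular require an RSS-capable OPNsense kernel).

```
# /boot/loader.conf.local — example tunables, values are illustrative
hw.vtnet.csum_disable="1"    # disable checksum offload globally for vtnet
hw.vtnet.lro_disable="1"     # LRO tends to hurt routed/forwarded traffic
net.isr.maxthreads="-1"      # one netisr thread per core
net.isr.bindthreads="1"      # pin netisr threads to cores
net.isr.dispatch="deferred"  # defer packet processing to the netisr threads
net.inet.rss.enabled="1"     # receive-side scaling
net.inet.rss.bits="2"        # 2^bits RSS buckets, roughly match core count
```

The same values can also be entered as individual tunables in the OPNsense GUI under System > Settings > Tunables.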
#2
Hello

Many thanks for your advice. I had already read your HOWTO about virtualization (Proxmox).

Meanwhile, I did some other tests and configuration changes:

  • changed on OPNsense on an interface "disable HW VLAN offload" and "disable HW CRC offload":
    > resulting in slightly better speed. "disable HW TSO" and "disable HW LRO" worsened the speed (from 1.1 Gbps to 0.2 Gbps).
  • changed on the OPNsense VM guest hardware the NIC queues to the number of cores (8):
    > resulting in the same speed limitation.
  • changed on the OPNsense VM guest hardware the NIC driver from vtnet (VirtIO) to vmxnet3:
    > resulting in a speed increase by a factor of 4 (from 1.1 Gbps to ca. 5 Gbps). Unfortunately, the "disable HW offload" settings work differently on vmxnet3 than on VirtIO vtnet interfaces and created some anomalies (speed increases and strong speed limitations at the same time) with other VMs with VirtIO interfaces (it seems that vmxnet3 and VirtIO NICs cannot coexist on the same Proxmox bridge?).
  • installed a fresh OPNsense VM (25.7.2) on a second identical Proxmox node with the same VM characteristics (8 cores, 16 GB RAM, VirtIO NIC with 8 queues) and tested with iperf3 targeting an Ubuntu VM on the same node:
    > resulting in the same speed limitation as on the original Proxmox node. Disabling the firewall functionality on OPNsense resulted in only a small (10%) speed increase.
  • as the VirtIO vtnet drivers on FreeBSD are "said" to be not fully compatible or problematic (in a Proxmox environment), installed a FreeBSD VM guest (14.3, June 2025) on the same Proxmox node, with a VirtIO NIC on the same Proxmox virtual bridge as the OPNsense installation:
    > resulting in iperf3 speeds of about 10 Gbps with the FreeBSD VM as source or target against another Ubuntu VM on the same node, just as between the other Ubuntu VMs on that node.
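For completeness, the multiqueue change was done on the Proxmox side roughly as follows (a sketch; the VM id 100 and bridge name vmbr0 are placeholders for your own values):

```sh
# Proxmox host: give the OPNsense VM's VirtIO NIC 8 queues
qm set 100 --net0 virtio,bridge=vmbr0,queues=8

# inside the FreeBSD/OPNsense guest: check how many queue pairs
# the vtnet driver negotiated (sysctl name from the vtnet driver)
sysctl dev.vtnet.0.max_vq_pairs
```

The queue count should match the number of vCPUs so that RSS can spread the load across cores.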

So a general speed limitation is given by the old host hardware that Proxmox is installed on (OPNsense performs perfectly on a VMware ESXi 6.5 host with a Xeon Silver 4210 CPU). But OPNsense appears to behave specially on Proxmox regarding network speed, especially considering that a plain FreeBSD VM behaves like the Linux VMs on the same node and the same virtual bridge, at 10 Gbps. However, my experience seems to correlate with the issues reported in this topic: https://forum.opnsense.org/index.php?topic=45870

#3
opnsense : 25.7.2
proxmox : 8.4.10
CPU : 24 Cores / Xeon(R) CPU L5640 (yes old ones)
RAM : 128GB

Hello

I am testing LAN network speed with iperf3 from and to an OPNsense 25.7 installation running as a VM on Proxmox. All devices involved in the test are connected to the same Proxmox virtual network bridge as VMs, or via a 10 Gbps physical switch. Say I have the following setup:

OPNSense VM on proxmox : br00 , 8 cores, 16 GB RAM
proxmox node host : br00
VM1 ubuntu on proxmox : br00
VM2 ubuntu on proxmox : br00
Dev1 ubuntu : br00 <-> 10Gbps  physical switch

  • Using iperf3 between the Proxmox node, VM1 and VM2 (in all server/client combinations) the speed is about 15-20 Gbps. Obviously, the speed to device Dev1 on the attached physical switch is about 9.8 Gbps. Those speeds are OK.
  • Using iperf3 as a client on OPNsense towards the other VMs and/or the Proxmox host, the speed is only about 0.9-1.2 Gbps. The same is true if OPNsense is used as the iperf3 server. The OPNsense CPU core load during the speed test is at most 50%, and RAM usage is also less than 25% (on the OPNsense dashboard). The total Proxmox CPU load is less than 50% during the speed test.
The involved network interface of the OPNsense VM attached to the Proxmox virtual bridge br00 is defined as type VirtIO (like all interfaces of the other VMs) and shows up as 10Gbase-T full duplex; thus, that seems to be OK.
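The tests were run along these lines (the IP address is a placeholder for VM1):

```sh
# on VM1 (server side)
iperf3 -s

# on the OPNsense VM (client side): 4 parallel streams for 30 seconds
iperf3 -c 192.168.1.20 -P 4 -t 30

# same pairing, but with the data flowing towards the client (-R),
# so both directions can be measured without swapping roles
iperf3 -c 192.168.1.20 -P 4 -t 30 -R
```

The 0.9-1.2 Gbps figure holds regardless of the number of parallel streams.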

So my question is why the throughput/network speed of the OPNsense VM is limited to about 0.9-1.2 Gbps, even when it acts as the iperf3 client. Operating as an iperf3 client towards the attached LAN under "normal" circumstances, the firewall performance should not be the source of the speed limitation (?).

Many thanks in advance for advice.




#4
Hello

Started today with IDS on OPNsense 25.1.7_4 and selected "Hyperscan" as the pattern matcher. Unfortunately, with "Hyperscan" I got the error "IDS log reports "hs" is an invalid mpm algo", and it became apparent that "Hyperscan" requires SSSE3. Running OPNsense on Proxmox with the qemu64 CPU type on (old) Xeon Westmere-EP hardware, it is not possible to switch the CPU type to anything other than qemu64 and still start the OPNsense VM. Switching the pattern matcher to "Aho-Corasick, Ken Steele" then resulted in the Suricata error "Error - Just ran out of space in the queue. Fatal Error."
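If someone hits the same Hyperscan error, the check and the usual fix can be sketched like this (the VM id 100 is a placeholder; note that in my case the VM would not start with any CPU type other than qemu64, so your mileage may vary):

```sh
# FreeBSD guest: check whether SSSE3 is visible to the OS at all
# (qemu64 hides it even when the physical CPU supports it)
grep -i ssse3 /var/run/dmesg.boot

# Proxmox host: pass the physical CPU's feature flags through to the VM
qm set 100 --cpu host
```

With `--cpu host` the guest sees the host CPU's instruction set, which normally makes Hyperscan selectable again, at the cost of losing live migration between dissimilar nodes.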
#5
I agree. Even when editing a single peer configuration, it would be very useful to have the possibility to export the modified configuration to a config file. Currently, we modify existing peer configurations directly in the already existing external config files.

Also, as a feature request (see separate post), it would be very useful if firewall aliases for network (groups) and host definitions could be used in the "Allowed IPs" field and resolved appropriately when exported to single or multiple peer config files.
#6
It may happen that in a WireGuard road-warrior setup additional networks and/or hosts have to be added to the "Allowed IPs" field.

Currently, this is only possible by entering network definitions in CIDR notation in the "Allowed IPs" field. For larger WireGuard road-warrior peer setups, it would be very useful if it were also possible to use "named" network definitions from the firewall alias list.

Thus one could enter, as an example, firewall aliases such as "10.242.10.10/32, net_internal, net_internal_dmz".

This would render the WireGuard peer setup much more manageable.
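To illustrate the request (the key and networks below are placeholders, the alias names are the ones from my example):

```
# exported peer config today — every network spelled out in CIDR
[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.242.10.10/32, 192.168.1.0/24, 192.168.5.0/24

# with alias support, the GUI field could instead read:
#   Allowed IPs: 10.242.10.10/32, net_internal, net_internal_dmz
# and the export would resolve the aliases to the CIDR list above
```

When an alias changes, re-exporting the peer files would then pick up the new networks automatically.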
#7
Hello

here is my configuration, which may help others to set up 1:1 / One-to-One NAT with single IPs for multiple public single IPs.

here is the basic network layout as mentioned above:

Network-Layout

WAN | xxx.yyy.zzz.240/29 public subnet,  xxx.yyy.zzz.241 router, xxx.yyy.zzz.242 OPNsense WAN
DMZ | 192.168.5.0/24
LAN | 192.168.1.0/24


1:1 IP mapping

WAN | xxx.yyy.zzz.244 -> DMZ | 192.168.5.10/24
WAN | xxx.yyy.zzz.245 -> DMZ | 192.168.5.11/24


OPNsense Configuration | Interfaces:

interface WAN | IP xxx.yyy.zzz.242/29, gateway autodetect
interface DMZ | IP 192.168.5.1/24, gateway autodetect
interface LAN | 192.168.1.1/24, gateway autodetect


OPNsense Configuration | Interfaces | Virtual IPs:

interfaces virtual IP | xxx.yyy.zzz.244/32, if: WAN, type: Proxy ARP
interfaces virtual IP | xxx.yyy.zzz.245/32, if: WAN, type: Proxy ARP


OPNsense Configuration | Firewall | One-to-One: (I found that aliases do not work here)

firewall one-to-one | if: WAN, ex IP:  xxx.yyy.zzz.244/32, in IP - single Host/Network: 192.168.5.10/32, dest: any, type: binat, nat reflection: enable
firewall one-to-one | if: WAN, ex IP: xxx.yyy.zzz.245/32, in IP - single Host/Network: 192.168.5.11/32, dest: any, type: binat, nat reflection: enable
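Under the hood, these two entries translate to pf binat rules roughly like this (only a sketch for understanding; OPNsense generates the actual ruleset itself):

```
# approximate pf equivalent of the two One-to-One entries:
# bidirectional 1:1 translation between the DMZ host and its public IP
binat on $WAN from 192.168.5.10 to any -> xxx.yyy.zzz.244
binat on $WAN from 192.168.5.11 to any -> xxx.yyy.zzz.245
```

Because binat is bidirectional, it replaces both the inbound destination NAT and the outbound source NAT for those hosts; the firewall rules below then only decide what traffic is allowed through.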


OPNsense Configuration | Firewall | Rules | WAN (Interface):

firewall rules wan | action: Pass, quick: enabled, if: WAN, direction: in, protocol: any, source: any, destination: any, gateway: default


or, if only some specific port ranges should be 1:1 forwarded (again, aliases for the DMZ IP address seem not to work):


firewall rules WAN | action: Pass, quick: enabled, if: WAN, direction: in, protocol: TCP/UDP, source: any, destination - single host network: 192.168.5.10/32, destination port range: 443 (for https), gateway: default
firewall rules WAN | action: Pass, quick: enabled, if: WAN, direction: in, protocol: TCP/UDP, source: any, destination - single host network: 192.168.5.11/32, destination port range: 80 (for http or Alias with multiple ports), gateway: default


for any other protocol types, in addition to a TCP/UDP port range, additional rules have to be added:


firewall rules WAN | action: Pass, quick: enabled, if: WAN, direction: in, protocol: ICMP, source: any, destination - single host network: 192.168.5.10/32, gateway: default


best regards
#8
Hello

Many thanks for your prompt response and hints. I am sorry to reply only today.

Now our 1:1 NAT configuration on multiple single public IPs is working, in a basic configuration (without special, service-specific, blocking/allowing firewall rules).

I will post our configuration here (anonymised) in the next days, in case somebody else needs a step-by-step guide.

best regards
#9
OPNsense : 23.7.1_3-amd64

Hello

We are migrating our router/firewall infrastructure from Sophos UTM 9.7 to OPNsense, and I apologize for addressing the 1:1 NAT theme again, although it is a topic with many entries in the forum. Unfortunately, I am stuck in the OPNsense configuration and was not able to get the 1:1 NAT running after hours of configuration attempts and consulting forum entries :-[.

Thus I would be very grateful for any concise (step-by-step) advice on how a 1:1 NAT configuration with single IPs should be done.

Our network "layout" is the following:



WAN | xxx.yyy.zzz.240/29 public subnet,  xxx.yyy.zzz.241 router, xxx.yyy.zzz.242 OPNsense WAN
DMZ | 192.168.5.0/24
LAN | 192.168.1.0/24



on the DMZ subnet we had the following IPs 1:1 NATed (on Sophos UTM 9.7) and now want the same on OPNsense:



WAN | xxx.yyy.zzz.244 -> DMZ | 192.168.5.10/24
WAN | xxx.yyy.zzz.245 -> DMZ | 192.168.5.11/24



therefore we have the following initial configuration on OPNsense:



interface WAN | IP xxx.yyy.zzz.242/29, gateway autodetect
interface DMZ | IP 192.168.5.1/24, gateway autodetect
interface LAN | 192.168.1.1/24, gateway autodetect

virtual IP | xxx.yyy.zzz.244/32, type alias
virtual IP | xxx.yyy.zzz.245/32, type alias

firewall one-to-one | if WAN, ex IP  xxx.yyy.zzz.244/32, in IP 192.168.5.10, dest any, type nat, nat reflection enable
firewall one-to-one | if WAN, ex IP  xxx.yyy.zzz.245/32, in IP 192.168.5.11, dest any, type nat, nat reflection enable

firewall outbound | manual rules
firewall outbound | IP4, any, any, LAN address (in order that LAN has Internet access)

firewall advanced settings | NAT 1:1 reflection enabled



We tried different manual firewall rules to enable/allow traffic to/from the 1:1 NATed public addresses (xxx.yyy.zzz.244/32 and xxx.yyy.zzz.245/32), but we were not able to access the servers behind those 1:1 NATed addresses (neither from the public nor from the LAN side). We are, however, able to access their DMZ addresses 192.168.5.10 and 192.168.5.11 from the LAN.

So it seems that we are missing an important part of the configuration that makes 1:1 NAT work for us. Therefore we would be very grateful for any hint or example configuration on how to make 1:1 NAT with single IPs on a public subnet and DMZ work.

Many thanks and best regards,

André
#10
Hello

We would like to switch our Internet connection infrastructure to OPNsense.

Our current Internet connection infrastructure consists of a router with a fixed public IP (fibre connection on VLAN 10) that routes to our (fixed) public IP subnet. Our firewall routes/maps this public subnet to our internal/private network and DMZ. Router and firewall are both VMs.

We would like to ask the community if it would be possible to collapse router and firewall into one single OPNsense configuration. If this is possible, we would also like to ask whether such an approach is reasonable in terms of performance and security?

Many thanks in advance for any hint