DEC4280

Started by DMartinsson, January 05, 2026, 11:22:58 AM

Hello all

We have deployed a few DEC4280s (currently on 25.10.1_2) in our company. After some application performance issues we used iperf3 to check what throughput the firewalls can actually deliver, and we are far from the specs:

Firewall Throughput: 60 Gbps
Firewall Packets Per Second: 5000 Kpps
Firewall Port to Port Throughput: 21 Gbps

Our internal DEC4280, which sits between the server and client VLANs, is connected with a 2x25 Gbit lagg0 to the primary server VLAN and a 2x25 Gbit lagg1 to about 20 client and developer VLANs.

iperf3 results between two servers inside the server VLAN are normal at 23-24 Gbit/s, as all our hardware servers are also connected with 2x25 Gbit. Between the VLANs, where the DEC4280 routes, we get:

iperf3 -c x.x.x.x -p 1234 -P1: 3-4 Gbit/s
iperf3 -c x.x.x.x -p 1234 -P4: 8-13 Gbit/s
iperf3 -c x.x.x.x -p 1234 -P8: 10-22 Gbit/s

On the OPNsense box, top -CHIPS shows 1, 4, or 8 cores at 99% for iperf3 -P1, -P4, and -P8 respectively.

The -P4 and -P8 runs also vary: sometimes -P8 reaches 10 Gbit/s, sometimes 16, sometimes 22. The throughput varies between runs, not within a run.
top -CHIPS shows that the number of CPUs at 99% correlates with it: the higher the throughput, the more CPUs sit at 99%.

The low single-stream numbers are also visible when copying files over CIFS between Windows clients and a Windows server. There we get 250-300 MByte/s, which corresponds to the 3-4 Gbit/s above.


I have already tried RSS both enabled and disabled, with these tunables:

net.isr.maxthreads=-1
net.isr.bindthreads=1
net.isr.dispatch=deferred
net.inet.rss.enabled=1
net.inet.rss.bits=4

Enabling or disabling makes no difference for a single stream. With multiple streams the CPUs are utilized more evenly.
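For reference, on OPNsense these values would typically be set under System > Settings > Tunables; a sketch of the equivalent loader-config fragment (assuming a FreeBSD-style /boot/loader.conf.local, reboot required for the netisr/RSS settings to take effect):

```
# Sketch: netisr/RSS tunables (reboot required)
net.isr.maxthreads="-1"      # one netisr thread per CPU core
net.isr.bindthreads="1"      # pin each netisr thread to its core
net.isr.dispatch="deferred"  # queue packets instead of inline dispatch
net.inet.rss.enabled="1"     # enable kernel receive-side scaling
net.inet.rss.bits="4"        # 2^4 = 16 RSS buckets
```

Note that RSS hashes each flow to one queue/CPU, so a single TCP stream still lands on a single core, which matches the one-core-at-99% observation; these settings mainly help multi-stream workloads.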

Power savings are set to maximum. That change made a difference: the iperf3 tests no longer start at low numbers and slowly ramp up.

hw.ibrs_disable=1 made a slight difference.
ice_ddp_load=yes made a huge difference; before that we had 1 Gbit/s on a single stream.
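A sketch of the corresponding loader settings (the DDP package lets the ice(4) NIC use its full packet-processing pipeline; without it the driver falls back to a reduced "safe mode", which would explain the huge difference):

```
# Sketch: /boot/loader.conf.local entries for the settings mentioned above
ice_ddp_load="YES"    # preload the Intel E8xx DDP firmware package
hw.ibrs_disable="1"   # disable the IBRS mitigation (CPU security trade-off)
```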


I find 3-4 Gbit/s for a single stream too low for this big box with 25 Gbit connectivity and the spec "Firewall Port to Port Throughput 21 Gbps".

Any suggestions?
Has anyone with a DEC4280 also run iperf3 tests?

br
Daniel

Did you upgrade to the latest available BIOS?

https://forum.opnsense.org/index.php?topic=48449.0
Hardware:
DEC740

Quote from: Monviech (Cedrik) on January 05, 2026, 12:58:47 PMDid you upgrade to the latest available BIOS?

https://forum.opnsense.org/index.php?topic=48449.0

A few months ago. It made a difference: multi-stream now gets over 8 Gbit/s.

I'm not surprised that the quoted performance is a bit hard to achieve. The device is a throughput device - did you try it with 12 or 16 threads?

Quote from: pfry on January 07, 2026, 01:17:28 AMI'm not surprised that the quoted performance is a bit hard to achieve. The device is a throughput device - did you try it with 12 or 16 threads?


Yes, sometimes it reaches 14 Gbit/s, sometimes 23.

But the point is single-stream performance, which is only 3-4 Gbit/s when one client talks to one server.

Is there nobody with this hardware who can test and compare iperf3 performance through the firewall?

br
Daniel

Hi Daniel,
we're seeing the exact same issue on our DEC3862 (EPYC 3201, ax driver, 10G SFP+). Single-stream tops out at 3–4 Gbit/s (or even less) through the firewall, multi-stream varies wildly. The appliance was purchased for 10G inter-VLAN routing and right now is clearly not delivering.
We spent quite some time isolating the problem over the last couple of weeks and found something important: it seems to be a firmware regression. On our second DEC3862 (vanilla install, no plugins, no custom config, no tunables, just a single VLAN) we benchmarked with iperf3 before and after updating:

25.10: 9.45 Gbit/s receive, 6.56 Gbit/s send (essentially line speed)
25.10.2: 3.35 Gbit/s receive, 1.77 Gbit/s send
Rollback to 25.10 via ZFS snapshot: line speed restored immediately

We also tested OPNsense on completely different hardware (i9-9900K with Intel X520-DA2, ix driver). Full line speed right from the start, no issues. So it seems to be specific to the Deciso appliances and the firmware versions after 25.10.

What's even more alarming is that a UDP test on our production DEC3862 (25.10.2) showed 48% packet loss at 10G. The appliance is silently dropping nearly half of all packets under load. Some UDP packet loss is expected, but nearly 50% is unusual. With TCP you don't notice it as clearly because retransmits mask the drops, but the underlying loss rate is severe.
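For anyone reproducing this, a minimal sketch of such a UDP loss test (placeholder address and port; the server side runs iperf3 -s -p 1234), with an awk one-liner to pull the loss percentage out of the receiver summary:

```shell
# Sketch: 30-second UDP test at a 10 Gbit/s target rate (placeholder host).
# iperf3's UDP summary line ends with "lost/total (NN%)"; the awk program
# grabs the parenthesized percentage and strips the punctuation.
iperf3 -c x.x.x.x -p 1234 -u -b 10G -t 30 \
  | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\(.*%\)$/) p = $i }
         END { gsub(/[()%]/, "", p); print "loss_percent:", p }'
```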

Since you're on 25.10.1_2 and seeing the same symptoms on a DEC4280 with the ice driver (I guess?), this suggests it's not a NIC-driver-specific problem but rather something in the FreeBSD kernel or pf packet path that was introduced after 25.10 (for example, from what I can see in the changelogs, the way checksums are handled has changed).

We're opening a support ticket with Deciso including all our benchmark data. Happy to share our findings in more detail if that helps. If you have a ZFS snapshot from before the update, it might be worth testing a rollback on your side as well to confirm.
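If the box runs on ZFS, that rollback test can be done via FreeBSD boot environments (a sketch; the boot-environment name is whatever bectl list shows for the pre-update state, and recent OPNsense also exposes this under System > Snapshots):

```
# Sketch, assuming a ZFS installation with a pre-update boot environment:
bectl list                        # show available boot environments
bectl activate <pre-update-name>  # mark the old environment for next boot
shutdown -r now                   # reboot into it
```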

Best regards,
Robin