Upgraded a Protectli VP2410 to a VP2420 and hit a cross-interface performance problem

Started by dmax68, October 14, 2025, 11:03:43 PM

Hey all, first time on the forum but not a first-time user. I've been on OPNsense since it forked from pfSense.

I've got an issue that has me stumped. I've been running a Protectli VP2410 for the last 3 years and it has been rock-solid stable. Not running Suricata or IDS/IPS or anything special: just dual WAN, a LAN, a VLAN, and a DMZ (Nextcloud), with some rules and DHCP on the LAN/VLAN. Both of my ISPs announced that 2 Gbit fiber was coming soon, so I splurged and picked up a VP2420 to get the 2.5G interfaces. I did not restore my config from the old unit; I documented my settings and entered them into a fresh OPNsense install on the new unit. Everything works great except traffic to/from the DMZ. I haven't tried the VLAN, as that one is wireless, so I expect a performance hit there anyway.
1. From the public internet, uploads/downloads to the Nextcloud instance in the DMZ (192.168.11.23) run between 1 and 2 Mbps (on the old device this easily hit 700+ Mbps; I know because this is how I moved ISOs from home to work).
2. From my workstation (192.168.10.167) to the DMZ machine (192.168.11.23), iperf shows 1.47 Mbits/sec. The same test to a file server (192.168.10.20) in the same LAN segment as my workstation shows 826 Mbits/sec (a sketch of how I scripted these runs follows this list).
3. From a test machine I spun up in the DMZ (192.168.11.20) to the target machine (192.168.11.23), I see 9.55 Gbits/sec (same hypervisor host).
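For reference, here's roughly how I drove the workstation-side runs. This is a minimal sketch only: it assumes iperf3 is installed, an "iperf3 -s" server is already listening on each target, and the labels are just mine.

import json
import subprocess

# (label, target IP) pairs from the tests above
TESTS = [
    ("workstation -> DMZ Nextcloud host", "192.168.11.23"),
    ("workstation -> same-segment file server", "192.168.10.20"),
]

for label, target in TESTS:
    # -J emits JSON; a 5-second run is plenty to spot a 1.5 Mbps vs 800 Mbps gap
    result = subprocess.run(
        ["iperf3", "-c", target, "-t", "5", "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    bps = report["end"]["sum_received"]["bits_per_second"]
    print(f"{label}: {bps / 1e6:.2f} Mbit/s")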
I only stood up the iperf tests today, but last week I put the old VP2410 back in play for a day and all the performance issues went away.
What am I missing?

All of the cases where your speeds are slow seem to involve the hypervisor (your DMZ host). I saw something similar on a Linux machine acting as a VM host, connected to a switch port that is a trunk. I had to try different combinations of software bridge + VLAN interface setups on the Linux host to resolve the abysmally slow transfers; I was only getting several hundred kbps at one point. We have the I226-V NIC in common; it could be an issue with that.

I would eliminate the DMZ host from testing to isolate it as the source of the problem. Since you already confirmed that transfers between physical hosts on the same VLAN are fine, the next thing to try is two physical hosts on different VLANs. If inter-VLAN routing is OK, then you can at least be sure the router is functioning correctly, and can focus back on the hypervisor setup.


Is the OPNsense itself virtualized? If yes, are you using VirtIO (vtnet) interfaces?
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

You said you did not carry over the configuration. I would try to do that and just reassign the interfaces. That way, you would be sure that there is no setting that you once had and now forgot.
Intel N100, 4* I226-V, 2* 82559, 16 GByte, 500 GByte NVME, ZTE F6005

1100 down / 800 up, Bufferbloat A+

Apologies for the late reply, life got in the way.
To add a few details:
1. OPNsense is not virtualized. It is running on a new Protectli VP2420 device (a physical upgrade/replacement for the previous VP2410).
2. The DMZ host currently in play with the Nextcloud VM is the same host/VM that was in play with the old VP2410 OPNsense.
3. Over the weekend, I took the new VP2420 out of the mix and put the old VP2410 back in, and everything ran like it did before: no issues with DMZ performance.
4. With the exception of my workstation, all of the source/target iperf machines are VMs running on vSphere 8. I'm not sure how relevant the hypervisor is, since I get the same performance regardless of whether the source is my workstation or a VM.
With that, I would assume the issue lies within the VP2420: either the device itself or the OPNsense install/config.
I think my next plan of action is to:
A. Set up another host in the DMZ and try that. I doubt it will make a difference, given item 3 above.
B. Completely rebuild the VP2420 but swap the LAN<>DMZ interfaces. If the problem follows the DMZ, it's a config problem; if it stays with the interface, that points to a hardware issue. If the problem goes away entirely, then I fudged a config somewhere (and I am going to go N.V.T.S. nuts wishing I understood what happened...).

Thoughts? Any other ideas I can try?
Thanks for the assist. You are appreciated.

The 2420 uses I225s (yikes).
1) Verify what NVM version those I225s have.
2) Verify what link speed is reported on both ends, if possible. Any logs indicating link issues or a low link speed?
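On the OPNsense side, something like this would dump the negotiated media for each NIC. A sketch only: the igc0..igc3 names are assumptions (adjust to your assignments), and it just greps FreeBSD's "media:" line, e.g. "media: Ethernet autoselect (1000baseT <full-duplex>)".

import re
import subprocess

INTERFACES = ["igc0", "igc1", "igc2", "igc3"]  # adjust to your actual assignments

for ifname in INTERFACES:
    out = subprocess.run(["ifconfig", ifname], capture_output=True, text=True)
    if out.returncode != 0:
        print(f"{ifname}: not present")
        continue
    # pull the negotiated media line, e.g. "Ethernet autoselect (1000baseT <full-duplex>)"
    match = re.search(r"media:\s*(.+)", out.stdout)
    print(f"{ifname}: {match.group(1) if match else 'no media line found'}")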


And then I have to ask: why the 2420? For the price, why not a device with an N150 and I226s?
Mini-pc N150 i226v x520, FREEDOM

You should (may?) be able to ID the I225 revision via its PCI ID:

(stolen from chiphell)

Controller                                  Vendor  Device  Stepping  S-Spec
Intel(R) Ethernet Controller I225-LM        8086    15F2    B1        SLN9B/SLN9A
Intel(R) Ethernet Controller I225-V         8086    15F3    B1        SLN9D/SLN9C
Intel(R) Ethernet Controller (2) I225-LM    8086    15F2    B2        SLNJW/SLNJV
Intel(R) Ethernet Controller (2) I225-V     8086    15F3    B2        SLNJY/SLNJX
Intel(R) Ethernet Controller (3) I225-LM    8086    15F2    B3        SLNNJ/SLNNH
Intel(R) Ethernet Controller (3) I225-V     8086    15F3    B3        SLNMH/SLNMG
Intel(R) Ethernet Controller (3) I225-IT    8086    0D9F    B3        SLNNL/SLNNK
Intel(R) Ethernet Controller (3) I225-LMvP  8086    5502    B3        SLNNJ/SLNNH

You want a B3. If you're really bored and have a good magnifier, you can look at the device itself: the chip cap is marked with the "SL" S-Spec ID. The cap is ~7 mm, so the lettering is <1 mm.
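If you'd rather not squint at the chip, a rough way to pull the same IDs off the box itself. A sketch only: it assumes FreeBSD's "pciconf -l" field layout (which prints the hex IDs in lowercase) and leaves the rev-to-stepping lookup to you and the table above.

import re
import subprocess

# (vendor, device) -> controller name, taken from the table above
KNOWN = {
    ("0x8086", "0x15f2"): "I225-LM",
    ("0x8086", "0x15f3"): "I225-V",
    ("0x8086", "0x0d9f"): "I225-IT",
    ("0x8086", "0x5502"): "I225-LMvP",
}

out = subprocess.run(["pciconf", "-l"], capture_output=True, text=True).stdout
for line in out.splitlines():
    # e.g. "igc0@pci0:1:0:0: class=0x020000 rev=0x03 ... vendor=0x8086 device=0x15f3 ..."
    m = re.search(r"rev=(0x\w+).*vendor=(0x\w+)\s+device=(0x\w+)", line)
    if not m:
        continue
    rev, vendor, device = m.groups()
    name = KNOWN.get((vendor, device))
    if name:
        print(f"{line.split('@')[0]}: {name}, PCI rev {rev} (match against the table above)")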

I'll guess and say it's a 225-V (3) NIC, but could there be a (4)? I think the I225 stopped at (3).
Intel appears to have just released an updated driver, which won't help with OPNsense because we can't unload drivers that are compiled statically into the kernel.
This issue is really a shout-out to the OPNsense team: stop using GENERIC when compiling FreeBSD, and build NIC drivers as loadable kernel modules instead.

10/20/2025
https://www.intel.com/content/www/us/en/products/sku/184676/intel-ethernet-controller-i225v/downloads.html
Mini-pc N150 i226v x520, FREEDOM

Couple of notes and a surprise.
I neglected to mention that I checked the negotiation a while back. The host was wired directly to the interface, then wired through a separate switch (it is back to direct-wired now). In all cases, the host and interface always auto-negotiated 1G full duplex, and at some point I hard-set the config to 1G FD on both devices.
I am not sure I remember seeing the 2430 back when I ordered in June; otherwise I probably would have gone that route.
I swapped the DMZ interface for one of the WAN interfaces and the problem followed the swap, so it is not a defective-port issue.
Now here is the surprise: it is a VP2420 running the J6412, but it is also running the I226-V (unless OPNsense is lying to me).
I think my next step is to restore the old 2410 config and see what it does. That will have to wait until this weekend.
(attachment: OPNsense screenshot of the interface listing)

I think that's correct. The VP2420 initially came with I225-V NICs, and this is what is still documented in the knowledge base, but at some point they seem to have started shipping with I226-V instead. The current specs on the product listing page confirm it (screenshot attached).

FWIW, I'm currently running a V1410, also with I226-V NICs, and don't see this problem. As I mentioned earlier, though, I did see a very similar problem with a specific Linux bridge setup, so that's why I was suspicious of the VM host.

Turn off the ACPI stuff, and make sure to update the I226 NVM.
Mini-pc N150 i226v x520, FREEDOM