C:\Users\Admin\Downloads\iperf-3.1.3-win64\iperf-3.1.3-win64>iperf3 -c 172.30.30.1 -p 34102
Connecting to host 172.30.30.1, port 34102
[  4] local 172.30.30.40 port 51088 connected to 172.30.30.1 port 34102
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  64.2 MBytes   539 Mbits/sec
[  4]   1.00-2.00   sec  67.4 MBytes   565 Mbits/sec
[  4]   2.00-3.00   sec  59.8 MBytes   501 Mbits/sec
[  4]   3.00-4.00   sec  58.6 MBytes   491 Mbits/sec
[  4]   4.00-5.00   sec  67.5 MBytes   567 Mbits/sec
[  4]   5.00-6.00   sec  68.1 MBytes   571 Mbits/sec
[  4]   6.00-7.00   sec  68.1 MBytes   572 Mbits/sec
[  4]   7.00-8.00   sec  58.5 MBytes   491 Mbits/sec
[  4]   8.00-9.00   sec  68.0 MBytes   570 Mbits/sec
[  4]   9.00-10.00  sec  60.2 MBytes   506 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   640 MBytes   537 Mbits/sec                  sender
[  4]   0.00-10.00  sec   640 MBytes   537 Mbits/sec                  receiver

iperf Done.
iperf3 -R -c 172.30.30.1 -p 34102
PCI is very bandwidth-constrained compared to PCIe, and with four GbE NICs on PCI you're almost certainly running into those constraints.
PCI 32-bit, 33 MHz: 1067 Mbit/s or 133 MB/s
PCI 32-bit, 66 MHz: 2128 Mbit/s or 266 MB/s
PCI 64-bit, 33 MHz: 2128 Mbit/s or 266 MB/s
PCI 64-bit, 66 MHz: 4264 Mbit/s or 533 MB/s
What Gig-E (1000BASE-X) should be: 1000 Mbit/s or 125 MB/s
What I'm getting: 537 Mbit/s or 67 MB/s
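For context, those ceilings fall straight out of bus width times clock rate, and unlike PCIe lanes, conventional PCI is a shared parallel bus, so every card in those slots splits the one figure:

Code:
32 bits x 33.33 MHz ~= 1067 Mbit/s  (133 MB/s, shared by everything on the bus)
64 bits x 66.66 MHz ~= 4264 Mbit/s  (533 MB/s)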
Found a block diagram for a 'typical' H77/Z77 board: the two PCI slots hang off a PCI/PCIe bridge chip, and the upstream link is PCIe 3.0 x1. That should be heaps for 4 x GbE links (presuming the bridge chip isn't utter rubbish).

I'd check core loading on the firewall while running a speed test from a LAN client to WAN and see if you're saturating a single core (SSH in to OPNsense, run 'top -P', and watch the 'interrupt' and 'idle' columns; a sketch of those commands follows below).

I dug up the PRO/1000 MT dual-port spec PDF, and there's no mention of RSS on it at all that I can see. DragonFly BSD has a separate 'emx' driver with RSS support for some of those older cards, but it doesn't list the 82546, just newer variants (https://man.dragonflybsd.org/?command=emx&section=4). I suspect you need a new NIC.
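For reference, the core-loading check mentioned above, from an OPNsense shell (assuming the firewall answers SSH at 172.30.30.1 as in the tests; 'root' is just a placeholder login):

Code:
ssh root@172.30.30.1   # log in to the firewall
top -P                 # per-CPU display; watch 'interrupt' and 'idle' while iperf3 runs
vmstat -i              # per-device interrupt rates; shows if one em port carries everything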
1 PCI Express 3.0/2.0 x16 slot (PCIe 3.0 speed is only supported by Intel 3rd-gen Core processors)
1 PCI Express 2.0 x4 slot
2 PCI slots (don't know if these are 32- or 64-bit, or whether they run at 33 or 66 MHz)
em0: <Intel(R) Legacy PRO/1000 MT 82546EB (Copper)> port 0xc0c0-0xc0ff mem 0xf7c60000-0xf7c7ffff irq 19 at device 0.0 on pci7
em1: <Intel(R) Legacy PRO/1000 MT 82546EB (Copper)> port 0xc080-0xc0bf mem 0xf7c40000-0xf7c5ffff irq 16 at device 0.1 on pci7
em2: <Intel(R) Legacy PRO/1000 MT 82546EB (Copper)> port 0xc040-0xc07f mem 0xf7c20000-0xf7c3ffff irq 16 at device 1.0 on pci7
em3: <Intel(R) Legacy PRO/1000 MT 82546EB (Copper)> port 0xc000-0xc03f mem 0xf7c00000-0xf7c1ffff irq 17 at device 1.1 on pci7
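All four ports attach to the same bus (pci7), so they share whatever that one bridge link provides. A sketch for identifying the bridge from the OPNsense shell (device numbering is an assumption and may differ on your box):

Code:
dmesg | grep pcib                      # which pcib* bridge pci7 hangs off
pciconf -lv | grep -B2 -A3 -i bridge   # vendor/device strings for the bridges in the system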
Could you please reverse the iperf3 connection with '-R' so the server sends? I'm curious about the speed.

Code:
iperf3 -R -c 172.30.30.1 -p 34102
C:\Users\Admin\Downloads\iperf-3.1.3-win64\iperf-3.1.3-win64>iperf3 -R -c 172.30.30.1 -p 23175
Connecting to host 172.30.30.1, port 23175
Reverse mode, remote host 172.30.30.1 is sending
[  4] local 172.30.30.40 port 56999 connected to 172.30.30.1 port 23175
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  30.2 MBytes   254 Mbits/sec
[  4]   1.00-2.00   sec  30.1 MBytes   253 Mbits/sec
[  4]   2.00-3.00   sec  30.1 MBytes   253 Mbits/sec
[  4]   3.00-4.00   sec  30.1 MBytes   253 Mbits/sec
[  4]   4.00-5.00   sec  30.1 MBytes   252 Mbits/sec
[  4]   5.00-6.00   sec  30.2 MBytes   254 Mbits/sec
[  4]   6.00-7.00   sec  30.1 MBytes   253 Mbits/sec
[  4]   7.00-8.00   sec  30.1 MBytes   252 Mbits/sec
[  4]   8.00-9.00   sec  30.1 MBytes   252 Mbits/sec
[  4]   9.00-10.00  sec  30.9 MBytes   259 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   302 MBytes   254 Mbits/sec    0             sender
[  4]   0.00-10.00  sec   302 MBytes   254 Mbits/sec                  receiver

iperf Done.
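If you want to rule out a single TCP stream being the bottleneck, iperf3 can also open parallel streams with -P. A sketch reusing the port from this run, where the first command sends from the client and the second (-R) from the server, each with four streams:

Code:
iperf3 -c 172.30.30.1 -p 23175 -P 4
iperf3 -R -c 172.30.30.1 -p 23175 -P 4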