Virtual private networks / os-wireguard is 2x slower than os-wireguard-go
« on: December 06, 2023, 04:32:03 pm »
For three days now I've been trying to solve this problem.
I have two sites with OPNsense on bare metal (fully identical: Intel(R) Xeon(R) CPU D-1518 @ 2.20GHz, 4 cores / 8 threads).
The sites are connected via WireGuard using the kernel-module implementation (os-wireguard); the link between the sites is 1 Gbps.
Testing bandwidth between the sites on os-wireguard:
last pid: 13225; load averages: 1.92, 0.95, 0.66 up 0+08:06:23 18:25:46
435 threads: 11 running, 401 sleeping, 23 waiting
CPU 0: 0.0% user, 0.0% nice, 77.9% system, 0.0% interrupt, 22.1% idle
CPU 1: 0.0% user, 0.0% nice, 69.0% system, 3.9% interrupt, 27.1% idle
CPU 2: 0.4% user, 0.0% nice, 70.5% system, 5.4% interrupt, 23.6% idle
CPU 3: 0.8% user, 0.0% nice, 70.9% system, 3.9% interrupt, 24.4% idle
CPU 4: 0.0% user, 0.0% nice, 69.8% system, 7.8% interrupt, 22.5% idle
CPU 5: 0.0% user, 0.0% nice, 70.5% system, 7.8% interrupt, 21.7% idle
CPU 6: 0.0% user, 0.0% nice, 73.6% system, 0.0% interrupt, 26.4% idle
CPU 7: 0.4% user, 0.0% nice, 76.7% system, 0.0% interrupt, 22.9% idle
Mem: 107M Active, 293M Inact, 926M Wired, 40K Buf, 30G Free
ARC: 323M Total, 72M MFU, 183M MRU, 164K Anon, 5624K Header, 62M Other
180M Compressed, 447M Uncompressed, 2.49:1 Ratio
Swap: 8192M Total, 8192M Free
PID USERNAME PRI NICE SIZE RES STATE C TIME CPU COMMAND
0 root -76 - 0B 2176K - 0 4:42 73.87% kernel{wg_tqg_0}
0 root -76 - 0B 2176K - 7 5:13 70.79% kernel{wg_tqg_7}
0 root -76 - 0B 2176K CPU5 5 4:42 67.78% kernel{wg_tqg_5}
0 root -76 - 0B 2176K - 1 4:42 67.63% kernel{wg_tqg_1}
0 root -76 - 0B 2176K - 6 4:40 67.16% kernel{wg_tqg_6}
0 root -76 - 0B 2176K - 4 4:38 66.85% kernel{wg_tqg_4}
0 root -76 - 0B 2176K - 3 4:40 66.61% kernel{wg_tqg_3}
0 root -76 - 0B 2176K - 2 4:34 64.67% kernel{wg_tqg_2}
12 root -72 - 0B 384K RUN 5 1:04 30.90% intr{swi1: netisr 0}
11 root 155 ki31 0B 128K RUN 6 476:24 30.06% idle{idle: cpu6}
11 root 155 ki31 0B 128K RUN 1 476:45 27.30% idle{idle: cpu1}
11 root 155 ki31 0B 128K RUN 7 476:17 27.22% idle{idle: cpu7}
11 root 155 ki31 0B 128K RUN 3 476:25 26.75% idle{idle: cpu3}
11 root 155 ki31 0B 128K RUN 2 476:16 25.15% idle{idle: cpu2}
Huge CPU load on both sides, and bandwidth is capped at ~500 Mbit/s:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 1.78 GBytes 509 Mbits/sec 38 sender
[ 5] 0.00-30.02 sec 1.78 GBytes 508 Mbits/sec receiver
With wireguard-go I get 800-850 Mbit/s at 20-30% CPU usage. What am I doing wrong?