100 Mbit/s on WAN instead of ~250 Mbit/s

Started by srepper, July 31, 2023, 11:51:57 PM

Previous topic - Next topic
July 31, 2023, 11:51:57 PM Last Edit: July 31, 2023, 11:53:41 PM by srepper
Hey community,

I am new to the OPNsense area.

Whenever I run a speed test I only get about 100 Mbit/s. If I connect my alternative router instead, I get the full bandwidth.

Setup:

ISP / Modem ( Speedport SMART ) -----> OPNsense ( N5105 ) -----> AP ( Asus DSL 68U )


OPNsense 23.7-amd64
FreeBSD 13.2-RELEASE-p1
OpenSSL 1.1.1u 30 May 2023

Intel(R) Celeron(R) N5105 @ 2.00GHz (4 cores, 4 threads)

downlink: 104.7 Mbit/s
uplink: 36.2 Mbit/s
ping: 11 ms
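One thing worth double-checking with numbers like these is the units: speed tests report Mbit/s, while file transfers usually display MB/s, a factor of 8 apart. A quick sketch of the conversion for the downlink above:

```shell
# The 104.7 downlink above is in Mbit/s; file transfers usually show MB/s.
# Divide by 8 (bits per byte) to compare the two units.
mbit=104.7
mbyte=$(awk -v m="$mbit" 'BEGIN { printf "%.1f", m / 8 }')
echo "$mbit Mbit/s is about $mbyte MB/s"   # -> 104.7 Mbit/s is about 13.1 MB/s
```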



What I did:

-- swapped the cable ( tried different CAT 7 cables )
-- added the parent WAN interface
-- disabled / enabled Unbound/AdGuard
-- applied the tuning parameters from Teklager and disabled flow control ( see below )


Does anyone have a hint for me?


My loader.conf:


root@OPNsense:/var/db/etcupdate/current/etc # cat /boot/loader.conf
##############################################################
# This file was auto-generated using the rc.loader facility. #
# In order to deploy a custom change to this installation,   #
# please use /boot/loader.conf.local as it is not rewritten, #
# or better yet use System: Settings: Tunables from the GUI. #
##############################################################

loader_brand="opnsense"
loader_logo="hourglass"
loader_menu_title=""

autoboot_delay="3"

# Vital modules that are not in FreeBSD's GENERIC
# configuration will be loaded on boot, which makes
# races with individual module's settings impossible.
carp_load="YES"
if_bridge_load="YES"
if_enc_load="YES"
if_gif_load="YES"
if_gre_load="YES"
if_lagg_load="YES"
if_tap_load="YES"
if_tun_load="YES"
if_vlan_load="YES"
pf_load="YES"
pflog_load="YES"
pfsync_load="YES"

# ZFS standard environment requirements
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
vfs.zfs.min_auto_ashift=12
opensolaris_load="YES"
zfs_load="YES"

# dynamically generated console settings follow
#comconsole_speed
#boot_multicons
#boot_serial
#kern.vty
console="vidconsole"

# dynamically generated tunables settings follow
dev.igc.0.fc="0"
dev.igc.1.fc="0"
dev.igc.2.fc="0"
dev.igc.3.fc="0"
hw.ibrs_disable="0"
hw.igc.rx_process_limit="-1"
hw.igc.tx_process_limit="-1"
hw.ixl.enable_head_writeback="0"
hw.syscons.kbd_reboot="0"
kern.ipc.maxsockbuf="4262144"
kern.ipc.nmbclusters="1000000"
kern.randompid="1"
legal.intel_igc.license_ack="-1"
net.enc.in.ipsec_bpf_mask="2"
net.enc.in.ipsec_filter_mask="2"
net.enc.out.ipsec_bpf_mask="1"
net.enc.out.ipsec_filter_mask="1"
net.inet.icmp.drop_redirect="1"
net.inet.icmp.icmplim="0"
net.inet.icmp.log_redirect="0"
net.inet.icmp.reply_from_interface="1"
net.inet.ip.accept_sourceroute="0"
net.inet.ip.forwarding="1"
net.inet.ip.intr_queue_maxlen="1000"
net.inet.ip.portrange.first="1024"
net.inet.ip.random_id="1"
net.inet.ip.redirect="0"
net.inet.ip.sourceroute="0"
net.inet.tcp.blackhole="2"
net.inet.tcp.delayed_ack="0"
net.inet.tcp.drop_synfin="1"
net.inet.tcp.log_debug="0"
net.inet.tcp.recvspace="65228"
net.inet.tcp.sendspace="65228"
net.inet.tcp.syncookies="1"
net.inet.tcp.tso="1"
net.inet.udp.blackhole="1"
net.inet.udp.checksum="1"
net.inet.udp.maxdgram="57344"
net.inet6.ip6.forwarding="1"
net.inet6.ip6.intr_queue_maxlen="1000"
net.inet6.ip6.prefer_tempaddr="0"
net.inet6.ip6.redirect="0"
net.inet6.ip6.use_tempaddr="0"
net.link.bridge.pfil_bridge="1"
net.link.bridge.pfil_local_phys="0"
net.link.bridge.pfil_member="0"
net.link.bridge.pfil_onlyip="0"
net.link.ether.inet.log_arp_movements="1"
net.link.ether.inet.log_arp_wrong_iface="1"
net.link.tap.user_open="1"
net.link.vlan.mtag_pcp="1"
net.local.dgram.maxdgram="8192"
net.pf.share_forward="1"
net.pf.share_forward6="1"
net.route.multipath="0"
security.bsd.see_other_gids="0"
security.bsd.see_other_uids="0"
vfs.read_max="32"
vm.pmap.pti="1"

I think there's something wrong with 23.7. I'm not getting the same peak speed I used to get, and I'm burning a lot more CPU. It looks like the ISRs are doing something they didn't do before.
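On FreeBSD you can see where the interrupt time goes with `vmstat -i` (per-source interrupt counters), `top -P` (per-CPU load including interrupt time) and `netstat -Q` (netisr queue statistics and drops). A minimal sketch of picking out the busiest source; the sample lines stand in for live `vmstat -i` output, and the igc queue names are assumptions:

```shell
# Sketch: find the busiest interrupt source. On the firewall you would pipe
# `vmstat -i` in; a captured sample stands in here so the snippet runs as-is.
sample='irq16: igc0:rxq0  1500000  210
irq17: igc0:rxq1   900000  130
irq18: igc0:txq0   300000   40'
# Sort numerically on the total-count column, keep the top line.
echo "$sample" | sort -k3 -rn | head -n 1 | awk '{ print "busiest:", $1, $2 }'
# -> busiest: irq16: igc0:rxq0
```

Comparing this output before and after the upgrade should show whether the ISR load really shifted.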


August 04, 2023, 02:21:50 PM #2 Last Edit: August 11, 2023, 10:40:47 AM by Seimus
Quote from: srepper on July 31, 2023, 11:51:57 PM

Hey community,

I am new to the OPNsense area. Whenever I run a speed test I only get about 100 Mbit/s; with my alternative router I get the full bandwidth.

[...]

Maybe a silly question but at which speed are your interfaces negotiated?
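For reference, you can check this from the OPNsense shell with `ifconfig igc1 | grep media` (interface name is just an example). A sketch of classifying the result; the sample line stands in for live output so the snippet is self-contained:

```shell
# Sketch: classify the negotiated link speed from an ifconfig "media:" line.
# On the box, feed this from `ifconfig igc1 | grep media`; a sample stands in here.
media_line='media: Ethernet autoselect (1000baseT <full-duplex>)'
case "$media_line" in
  *1000base*) echo "negotiated at 1 Gbit/s" ;;
  *100base*)  echo "stuck at 100 Mbit/s - check cable, port or autoneg" ;;
  *)          echo "unexpected media: $media_line" ;;
esac
# -> negotiated at 1 Gbit/s
```

A link stuck at 100baseTX would match the symptom in the first post exactly.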


Quote from: JustMeHere on August 03, 2023, 07:15:09 PM
I think there's something wrong with 23.7.  I'm not getting the same peak speed I used to get, and I'm burning a lot more CPU.  It looks like ISRs are doing something they didn't do before.



It looks really weird that the IRQs increased. I assume you have the same configuration as before the upgrade?


I also have an N5105 CPU but haven't upgraded yet. It's scheduled for next week, so we'll see what the outcome will be.

Regards,
S.
Networking is love. You may hate it, but in the end, you always come back to it.

OPNSense HW
APU2D2 - deceased
N5105 - i226-V | Patriot 2x8G 3200 DDR4 | L 790 512G - VM HA(SOON)
N100   - i226-V | Crucial 16G  4800 DDR5 | S 980 500G - PROD

Just registered to share my experience.

I ran into the same issue after upgrading to 23.7 on my j6314 appliance. My download speeds decreased significantly; I was originally convinced I had an ISP issue.

What caught my attention was that my cron job, which runs the speed test every morning against a specific server ID, no longer worked. I went into the GUI and noticed that my list of available servers was not normal and showed servers hundreds of miles away from me.

I then went into the Speedtest reporting tab and ticked the Help toggle, which brings up a few options above the log section. I noticed Speedtest was using HTTP, so I flipped the "switch to socket speedtest" option; my local servers reappeared, and I ran a test. All normal again!

My guess is that when the plugin was reinstalled during the 23.7 upgrade, Speedtest defaulted to HTTP tests.
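For anyone with a similar cron setup, a pinned-server entry looks roughly like this. The server ID 1234 is a placeholder, and the flag names follow the stock Ookla CLI; the OPNsense plugin's wrapper may differ:

```shell
# Example crontab line (sketch): run a speed test against a pinned server
# every morning at 06:00. Server ID 1234 is a placeholder.
# 0 6 * * *  root  speedtest --server-id=1234 >> /var/log/speedtest.log 2>&1
```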

Hopefully this solves the issue for you!