Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - bugacha

#1
25.7 Series / Re: 25.7 Upgrade OK
July 23, 2025, 08:41:45 PM
Upgrade okay. Init7 25 Gbps on iperf3 with the E810 + DDP works well.
#2
Quote from: MoonbeamFrame on June 23, 2025, 12:29:21 PM
XGS-PON is becoming available from my service providers, and I have started to look for hardware to facilitate migration to these services.

While I can find hardware with combinations of 2.5Gbit/s copper with SFP+, I have yet to find much in the SOHO market with 10Gbit/s copper with SFP+.

Does anyone have any recommendations for hardware that will run OPNsense?

I run 25 Gbps internet on an MS-A2 (9955HX). In the past I used a 14700, non-K version.

An important consideration is the network card you will use. The E810 and the ConnectX-4 Lx both work great for me.

I easily get 25 Gbps routing performance out of OPNsense on 8 cores.
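
If you're not sure which driver a card will use on FreeBSD/OPNsense, a quick sanity check from the shell is something like this (the E810 should attach via ice(4), the ConnectX-4 Lx via mlx5en(4)):

# list network-class PCI devices together with the driver bound to each
pciconf -lv | grep -B 3 network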
#3
I want to say I might have the same problem, but I'm not sure at this stage.

I do see the same RSS errors in dmesg.

Recently my internet speed dropped from 25 Gbps to 8-9. I run an Intel E810-XXV dual-port adapter on OPNsense 25.1.10.
I used to get 25 Gbps throughput through OPNsense, as verified with iperf3 tests (both IPv4 and IPv6) from a LAN client to an Internet iperf3 server.

What puzzles me is that I easily get 25 Gbps in iperf3 by running it on the LAN interface of OPNsense from a LAN client.
Also, when I look at top during that test, I see all CPUs busy doing something:

last pid: 12095;  load averages:  0.95,  0.45,  0.28                                                                                                                                      up 0+01:48:02  21:13:50
83 processes:  1 running, 82 sleeping
CPU 0:  0.0% user,  0.0% nice, 56.3% system,  0.4% interrupt, 43.4% idle
CPU 1:  0.4% user,  0.0% nice, 23.0% system,  2.7% interrupt, 73.8% idle
CPU 2:  3.5% user,  0.0% nice,  2.7% system, 16.0% interrupt, 77.7% idle
CPU 3:  0.4% user,  0.0% nice, 26.2% system,  1.6% interrupt, 71.9% idle
CPU 4:  0.4% user,  0.0% nice, 68.4% system,  5.1% interrupt, 26.2% idle
CPU 5:  0.0% user,  0.0% nice, 57.4% system,  1.2% interrupt, 41.4% idle
CPU 6:  0.4% user,  0.0% nice, 57.4% system,  0.4% interrupt, 41.8% idle
CPU 7:  0.0% user,  0.0% nice, 69.5% system,  0.0% interrupt, 30.5% idle
Mem: 142M Active, 313M Inact, 763M Wired, 305M Buf, 6643M Free


So to recap:

LAN client -> ice1 (OPNsense LAN, iperf3 -s): 25 Gbps easily
LAN client -> ice1 -> ice0 (WAN) -> Internet iperf3 server: dropped from 24-25 to 8-9 Gbps
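
The tests themselves are nothing special, roughly like this (the address and hostname below are placeholders, not my actual setup):

# scenario 1: iperf3 server on the OPNsense LAN interface, client on the LAN
iperf3 -s                              # on OPNsense
iperf3 -c 192.168.1.1 -P 8             # on the LAN client, 8 parallel streams
# scenario 2: same client, but against an iperf3 server on the Internet
iperf3 -c iperf.example.net -P 8       # IPv4
iperf3 -c iperf.example.net -P 8 -6    # repeat over IPv6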


At this stage, I'm not sure if this is OPNsense or my ISP...

#4
Quote from: Monviech (Cedrik) on May 08, 2025, 05:48:40 PM
Quote from: bugacha on May 08, 2025, 05:42:49 PM
I read this thread and I still don't understand a few things.


I use Unbound as DNS and don't want to change to Dnsmasq.

I'm all in favor of dropping ISC DHCP and migrating to Kea, but I need Router Advertisement support for IPv6.

What are my options?


That is easy, you use:
- Services/Unbound DNS
- Services/Kea DHCPv4
- Services/Router Advertisements

Apologies, I also use DHCPv6 and RA runs in Assisted mode today.

It's a standard setup: I get an IPv6 prefix from my ISP and use DHCPv6 to assign addresses from one of the subnets.
#5
I read this thread and I still don't understand a few things.


I use Unbound as DNS and don't want to change to Dnsmasq.

I'm all in favor of dropping ISC DHCP and migrating to Kea, but I need Router Advertisement support for IPv6.

What are my options?
#6
I added a few backends to Upstream, then used them in SNI-based routing. After I finished testing, I removed the SNI-based routing but was unable to remove the unused Upstream.

I just get an "Item in use by" message:

Nginx - {Nginx.sni_hostname_upstream_map_item.5dbfd85e-f1aa-43ca-bfa2-313a684a199c}
The item is 100% not referenced anywhere; there is nothing at all in nginx.conf.

I tried to manually grep files in /etc and /usr/local/etc - no matches at all.
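
For reference, the search was roughly along these lines (UUID taken from the error message above):

# look for any config or generated file that still references the orphaned upstream UUID
grep -r "5dbfd85e-f1aa-43ca-bfa2-313a684a199c" /etc /usr/local/etc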

An nginx restart or a reboot didn't help at all.

Any ideas how to fix this?
#7
i7-14700
Proxmox
OPNsense 25.1.1
Mellanox ConnectX-4 Lx
1 port passed through as WAN
1 port bridged as LAN into OPNsense and TrueNAS
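
In Proxmox terms the split looks roughly like this (VM IDs, PCI address and bridge name are placeholders for illustration; the physical LAN port itself is a member of the bridge on the host):

# WAN port of the ConnectX-4 Lx passed through to the OPNsense VM
qm set 100 --hostpci0 0000:01:00.0,pcie=1
# LAN side: virtio NICs for OPNsense and TrueNAS on the shared bridge
qm set 100 --net1 virtio,bridge=vmbr1
qm set 101 --net0 virtio,bridge=vmbr1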


Speedtest from TrueNAS:

# bin/speedtest -s 43030

   Speedtest by Ookla

      Server: Init7 AG - Winterthur (id: 43030)
         ISP: Init7
Idle Latency:     1.44 ms   (jitter: 0.08ms, low: 1.42ms, high: 1.70ms)
    Download: 23453.57 Mbps (data used: 23.0 GB)
                  3.30 ms   (jitter: 3.67ms, low: 1.13ms, high: 26.84ms)
      Upload: 22000.78 Mbps (data used: 22.7 GB)
                  1.25 ms   (jitter: 0.11ms, low: 1.08ms, high: 1.94ms)
 Packet Loss:     0.0%

https://www.speedtest.net/result/c/db97cbad-a4d3-4d23-af27-980535ffbe23
#8
Quote from: Netfloh on April 24, 2024, 10:36:00 PM
To finish this up ...

The Intel E810-XXVDA2 needs some developer love to work with OPNsense. I changed to a Broadcom P225P and this card works out of the box without compiling the kernel or ports.

Thanks Netnut for your help!

Did you try 25.1?

The ice_ddp driver has been upgraded to the latest version.
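
Once booted on 25.1, you can check which DDP package the driver actually picked up with something like:

# the ice(4) driver logs the active DDP package version at attach time
dmesg | grep -i "DDP package"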
#9
24.7, 24.10 Series / Re: Intel ice_ddp Package 1.3.41.0
February 15, 2025, 08:25:23 PM
So I'm on 25.1 and I get the same error for ice_ddp.ko in dmesg:

# dmesg | grep ice_ddp
ice0: The DDP package module (ice_ddp) failed to load or could not be found. Entering Safe Mode.
ice0: The DDP package module cannot be automatically loaded while booting. You may want to specify ice_ddp_load="YES" in your loader.conf
ice1: The DDP package module (ice_ddp) failed to load or could not be found. Entering Safe Mode.
ice1: The DDP package module cannot be automatically loaded while booting. You may want to specify ice_ddp_load="YES" in your loader.conf

EDIT: Actually never mind. I added both ice_ddp_load="YES" and if_ice_load="YES" under Tunables (see the sketch below).
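
In loader.conf terms that is roughly the following; OPNsense manages the actual file through the Tunables page, so treat this as a sketch:

# load the ice(4) driver and its DDP firmware package at boot
if_ice_load="YES"
ice_ddp_load="YES"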

By the looks of it, everything works now:

ice0: <Intel(R) Ethernet Network Adapter E810-XXV-2 - 1.43.2-k> mem 0x380000000000-0x380001ffffff,0x380002000000-0x38000200ffff irq 16 at device 0.0 on pci1
ice0: Loading the iflib ice driver
ice0: DDP package already present on device: ICE OS Default Package version 1.3.41.0, track id 0xc0000001.
ice0: fw 7.3.4 api 1.7 nvm 4.30 etid 8001b891 netlist 4.2.5000-1.14.0.2b9b23c0 oem 1.3415.0
ice0: Using 8 Tx and Rx queues
ice0: Reserving 8 MSI-X interrupts for iRDMA
ice0: Using MSI-X interrupts with 17 vectors
ice0: Using 1024 TX descriptors and 1024 RX descriptors
ice0: Ethernet address: 50:7c:6f:79:ca:e8
ice0: ice_add_rss_cfg on VSI 0 could not configure every requested hash type
ice0: PCI Express Bus: Speed 16.0GT/s Width x8
ice0: Firmware LLDP agent disabled
ice0: Link is up, 10 Gbps Full Duplex, Requested FEC: None, Negotiated FEC: None, Autoneg: False, Flow Control: None
ice0: link state changed to UP
ice0: netmap queues/slots: TX 8/1024, RX 8/1024
#10
24.7, 24.10 Series / Re: Intel ice_ddp Package 1.3.41.0
February 05, 2025, 09:57:57 AM
Does 25.1 now fully support the ice_ddp driver?