Messages - Praxis

#1
Not sure about OP, but in my case this is purely an exercise in curiosity and having parts on hand.  Hardware in my case is:

MB: Supermicro X13SAE-F-O
CPU: i7-14700K (E-Cores disabled)
RAM: 2x16GB DDR5-4400
2x Intel SSDs that I can't recall the specs on in RAIDZ-1

None of this was purchased specifically for OPNsense, and it definitely doesn't make any sort of logical sense.  Even with a 10G fiber primary and a 1G fiber secondary/VoIP/guest, the thing barely cracks 40% at full-tilt ingress/egress.  I don't think it'll quite sustain 25G full-duplex, but it can easily do well more than 10G.

I've not started additional testing like inter-VLAN routing, rules performance, or IPS, due to a rather strange throughput issue from a single Windows client, specifically on RX (which is definitely outside the topic of this thread; I'll be making a separate thread in which to beg for help).
#2
Not sure if this is quite necro territory, but I wanted to post here in case anyone else lands via searching in the future.

mimugmail is absolutely correct that QSFP breakout is most typically done at the switch, and in the case of NVIDIA/Mellanox it's the only way to accomplish it.

The exception I'm currently aware of is that some of the Intel 800 series cards do in fact support port configurations and breakout.  I'm currently running an E810-CQDA2 with a 4x25 port configuration, and it does actually function, though it hasn't been 100% smooth sailing [understatement].

Some things I've discovered:

Be careful buying used from the usual suspects.  The firmware update process on these cards is quite convoluted, and any non-standard model IDs or revisions will send you down a rabbit hole searching for specific NVM packages and editing package configs while praying the update goes through.
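On a FreeBSD-based box like OPNsense, one way to see exactly which variant you actually received, before fighting the NVM update, is to read the PCI IDs off the card.  A minimal sketch, assuming the driver has attached and named the device ice0:

```shell
# Dump vendor/device/subvendor/subdevice IDs for the first ice-attached card.
# The subdevice ID is what separates OEM/custom E810 variants from the retail
# models that the standard Intel NVM update packages expect to find.
pciconf -lv ice0
```

Compare the reported subdevice against what the NVM package's config file lists before attempting an update; a mismatch is usually where the "editing package configs and praying" begins.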

Specific to the CQDA series, there are several variants, but the key point is that the total throughput of a CQDA2 card, in aggregate across both QSFP28 complexes, is 100G.  This also limits which breakouts you can do.  As an example, my card:
root@central:~ # epct -nic 1 -get
Ethernet Port Configuration Tool
EPCT version: v1.39.32.05
Copyright 2019 - 2023 Intel Corporation.

Available Port Options:
==========================================================================
        Port                             Quad 0           Quad 1
Option  Option (Gbps)                    L0  L1  L2  L3   L4  L5  L6  L7
======= =============================    ================ ================
        2x1x100                       -> 100   -   -   -  100   -   -   -
        2x50                          ->  50   -  50   -    -   -   -   -
Active  4x25                          ->  25  25  25  25    -   -   -   -
        2x2x25                        ->  25  25   -   -   25  25   -   -
        8x10                          ->  10  10  10  10   10  10  10  10
        100                           -> 100   -   -   -    -   -   -   -

Warning: Any changes to the port option configuration will require a reboot before the device will function correctly.
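Switching the active option is done with the same tool.  Note the exact `-set` invocation below is my assumption based on the tool's `-get` syntax and Intel's EPCT documentation pattern, not copied from a live run:

```shell
# Hypothetical example: switch NIC 1 to the 2x2x25 breakout
# (25G on two lanes of each QSFP28 cage).
# Per the tool's own warning, a reboot is required before the
# new port layout takes effect.
epct -nic 1 -set 2x2x25
```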


The 2CQDA2 doubles things, so you get 200G across the whole card, but it is both physically larger and much more expensive.

They rely on the Intel ice driver stack, which is included, but yet again there is a catch.  With no config/tunables, the card will load into a sort of safe mode, as it expects to load a DDP config package that tunes it for whatever role it may be serving.  It will work in that state, but with extremely limited functionality.  At a minimum you'd want to set the tunable ice_ddp_load=YES, which loads the default config package shipped with the ice driver.
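Concretely, that's a loader tunable.  A minimal sketch, assuming you're setting it in the boot loader config directly rather than through the OPNsense tunables UI:

```shell
# /boot/loader.conf.local
# Load the default DDP package at boot so the ice driver
# comes up fully featured instead of in safe mode.
ice_ddp_load="YES"
```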

Outside ALL of that, you'll also have to bounce between controlling card settings with ifconfig and sysctl, as the full set isn't exposed through ifconfig.
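For the settings ifconfig doesn't cover, the driver hangs its knobs off the per-device sysctl tree.  A sketch of poking at both, assuming the first port attached as ice0:

```shell
# Link-level settings that ifconfig does expose:
ifconfig ice0

# Everything else lives under dev.ice.<unit>; -d prints the
# description of each node so you can see what is tunable.
sysctl -d dev.ice.0 | less
```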

This is a whole lot of typing to say that yes, you can in fact do breakout without a switch.  I can't say that it's a good idea though ;)

I finally registered on the forums intending to ask for help troubleshooting a throughput issue that exists in a single direction, for a single client, communicating with this very card.  I saw this post while doing due-diligence searching and figured I should share.