Intel E810 QSFP28 breakout with 8x virtual LAN interfaces?

Started by neo42, September 01, 2023, 12:47:05 AM

Hey everyone,
I am in the process of designing a very fast machine for use as a transparent bridge within a multi-25-gig network. Since VLANs don't work in bridge mode, I am going to split the existing VLANs into individual ports on the switch side and loop them through the OPNsense box. Since each VLAN needs two ports (in and out) and I need more than two VLANs, but the 1U server only has a slot for one card, the 4x SFP28 ports of the E810-XXVDA4 are not enough.
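
To illustrate the plan, this is roughly what one bridge per VLAN would look like with plain FreeBSD commands (a minimal sketch; the ice0-ice3 interface names are placeholders, and in OPNsense this would of course be configured through the GUI so it persists):

Code:
# one bridge per VLAN, pairing an "in" port with an "out" port
ifconfig bridge0 create
ifconfig bridge0 addm ice0 addm ice1 up    # VLAN A: ice0 <-> ice1
ifconfig bridge1 create
ifconfig bridge1 addm ice2 addm ice3 up    # VLAN B: ice2 <-> ice3
# for a transparent filtering bridge, filter on the bridge, not the members
sysctl net.link.bridge.pfil_member=0
sysctl net.link.bridge.pfil_bridge=1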

There is a version of the Intel E810 that has 2x QSFP28 (100 Gbit) ports, and breakout cables exist that split a QSFP28 port into 4x SFP28.
Does anyone have experience with the new Intel E810 cards and whether they work in breakout mode (as 4 individual LAN interfaces per QSFP28 port)? In theory that should give me 8x SFP28 interfaces that can be bridged.
Intel's website says the card is fully supported, but that is a rather uncommon use case.

Thx!

I don't have experience with the E810 adapters.

But there was a similar question on the Intel forum:

https://community.intel.com/t5/Ethernet-Products/E810-does-is-support-breakout-cables/m-p/1463312

Although it is about a single-port adapter, they referred to a Feature Support Matrix whose "Table 2. Media Types Supported for the E810" lists "QSFP28 Direct Attach Copper breakout cables" under "25 GbE Media Supported".

Quote: — "X" = Supported with Intel® NVM and software driver. — "SNV" = Supported but Not Validated

In my research I found these two models:

E810-CQDA2:
https://ark.intel.com/content/www/us/en/ark/products/192558/intel-ethernet-network-adapter-e810cqda2.html

E810-2CQDA2:
https://ark.intel.com/content/www/us/en/ark/products/210969/intel-ethernet-network-adapter-e8102cqda2.html

According to the product briefs, the first one (E810-CQDA2) can only do 100 Gbit/s in total, while the second one (E810-2CQDA2) can do 200 Gbit/s.

Quote: The E810-CQDA2 has eight MACs (Media Access Controllers) that can be set up in different port configurations.

Total throughput of the adapter is 100GbE for all configurations.
https://www.intel.com/content/www/us/en/support/articles/000093702/ethernet-products/800-series-network-adapters-up-to-100gbe.html

So you need:

  • 1x Intel E810-2CQDA2
  • 2x Intel XXV4DACBL1M - QSFP28 to 4x SFP28 breakout cables, 1 m, or similar
  • 2x Intel XXV4DACBL2M - QSFP28 to 4x SFP28 breakout cables, 2 m (alternative), or similar
  • 1x mainboard that supports PCIe 4.0 x16
According to this page, the ice driver is supported since OPNsense 22.7:

https://www.thomas-krenn.com/de/wiki/OPNsense_Netzwerkkarten-Treiber

(Correction: it has actually been supported since 21.7: https://forum.opnsense.org/index.php?topic=24302)
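
If you want to verify on a running box that the ice(4) driver picked up the card, something like the following should do (standard FreeBSD commands; the device index 0 is an assumption):

Code:
# check that the ice(4) driver attached to the adapter
dmesg | grep ice0
sysctl dev.ice.0.%desc    # prints the adapter description if the driver attached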

But to ultimately verify that it works you probably have to test it yourself; it's only about 1,400 EUR / $1,500. :D

There is also important information in this thread:

Quote: This adapter is essentially 2 adapters on a single board. For the card to work fully, it needs to be in an x16 physical slot that is bifurcated into 2 x8 slots electrically.

This will then show as 2 adapters to the system and there will be 1 port for each. If the slot in the system is not capable of bifurcation, it will only see half of the card and only 1 port will work.
https://community.intel.com/t5/Ethernet-Products/When-enable-Intel-E810-2CQDA2-to-50G-2-mode-only-one-port-can-up/m-p/1465333

So you need a mainboard that not only has 16 PCIe 4.0 lanes on the slot but can also bifurcate it into 2x x8.
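
Whether bifurcation actually worked is easy to check from FreeBSD, since the two halves enumerate as separate PCI devices (a sketch; the bus addresses in the comments are made up):

Code:
# with working bifurcation the E810-2CQDA2 shows up as two adapters
pciconf -lv | grep '^ice'
# expected something like:
#   ice0@pci0:65:0:0: ...
#   ice1@pci0:66:0:0: ...
# if only one ice device appears, the slot was not bifurcated and one port stays dead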

First of all, thank you vpx23!

A little update from my side:

I went with the E810-CQDA2 before reading vpx23's answer :/ and overlooked the limitation in my own research.

I can confirm that the card is fully working with the ice driver of OPNsense 23.7. It is also possible to turn on "breakout" mode, either with the epct (epct64e) tool on the CLI or directly in the firmware setup on most mainboards.
I am using an ASRock Rack B650D4U-2L2T/BCM with a Ryzen 9 7950X.
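
In case anyone wants to reproduce it, the mode change with Intel's Ethernet Port Configuration Tool looks roughly like this (syntax from memory, so treat it as a sketch; the "2x4x10" option string is an assumption, the -get output lists the valid ones):

Code:
epct64e -devices               # list E810 adapters and their NIC index
epct64e -nic=1 -get            # show the active and the available port options
epct64e -nic=1 -set "2x4x10"   # switch to breakout mode, then reboot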

With the current card I can turn on the 2x4x10G mode, which results in 8 interfaces (ice0-7) that work with "BlueLan 100GBASE-CR4 QSFP28 to 4x25GBASE-CR SFP28 Direct Attach Breakout" cables. The individual links are limited to 10G in that mode, which is enough for my needs since I mainly needed 8 interfaces.
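
After the reboot the eight ports can be sanity-checked like this (a quick sketch; the media string in the comment is an assumption):

Code:
# confirm all eight breakout interfaces exist and report their link speed
for i in 0 1 2 3 4 5 6 7; do ifconfig ice$i | grep media; done
# expected per interface: media: Ethernet autoselect (10Gbase-... <full-duplex>)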

Unfortunately the manual of the ASRock Rack B650D4U-2L2T/BCM does not mention bifurcation, but there is an option in the AMD PBS submenu called "PCIE/GFX Lanes Configuration" that can be set to x8x8. Since the board does not have another x8 PCIe slot, I am hopeful the E810-2CQDA2 will work in it and then allow 2x4x25G.
I might be able to confirm that at some point, since I am building 3 systems.

Quote from: neo42 on September 26, 2023, 02:24:04 AM
First of all, thank you vpx23!

You're welcome!

Quote from: neo42 on September 26, 2023, 02:24:04 AM
Unfortunately the manual of the ASRock Rack B650D4U-2L2T/BCM does not mention bifurcation, but there is an option in the AMD PBS submenu called "PCIE/GFX Lanes Configuration" that can be set to x8x8.

That sounds like the right thing to me, but ask ASRock to be 100% sure before buying the E810-2CQDA2.