
Topics - toxic

#1
Hello,

I would very much like to move from my current legacy VPN client to a VPN instance of type "client", so I can benefit from the "Depend on CARP" option and have my failover routers automatically start/stop the OpenVPN connection whenever they become master/backup.

But currently I use the following advanced options, for which I can't seem to find a replacement:

pull-filter ignore "ifconfig-ipv6 "
pull-filter ignore "route-ipv6 "
route 10.0.0.0 255.0.0.0 net_gateway
route 172.16.0.0 255.240.0.0 net_gateway
route 192.168.0.0 255.255.0.0 net_gateway


My limited understanding is that this VPN is set up on the server side to push me a config where everything is routed through the VPN, which messes things up on my router.
So the first two lines are there so I don't get IPv6 from this VPN at all, because it really messes up my IPv6: for some unknown reason, IPv6 traffic from the firewall itself selects the OpenVPN IPv6 address as source but goes out through my WAN interface instead of the OpenVPN one. I would have preferred the firewall to pick my LAN IPv6 address and send it out the WAN interface, since I set up my ISP router to delegate me the /64 I use on my LAN...

Maybe I could replace those with some other routes I can't figure out, in order to keep IPv6 connectivity on this VPN (which I don't really need) while still ensuring the firewall uses my WAN for all IPv6 traffic except what belongs to the /48 subnet my VPN provider gives me.

As for the other lines, I really believe the VPN provider is pushing something I don't know about that messes up all routing, and these routes are needed so I still have IPv4 internet through my ISP WAN.

I have tried the "route-nopull" and "route-noexec" options but I'm really lost...
Is there a way to use a "VPN instance" of type client and keep passing these options (even manually in a config file, just to verify it works first)? And if not, can you help me figure out other settings that could do the job?

To be honest, I was not able to see any difference in netstat -rn with my VPN up or down, even when I left out the pull-filters above, so I feel OpenVPN is messing with routing in some way I'm just not seeing... What is the proper way to look at all routing tables in OPNsense, including whatever OpenVPN has added or modified?
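
For the record, this is what I've been looking at from the shell so far (standard FreeBSD commands, so I assume they show everything, including whatever OpenVPN adds):

netstat -rn -f inet      # full IPv4 routing table
netstat -rn -f inet6     # full IPv6 routing table
route -n get default     # which gateway/interface the default route really points at
route -n get 8.8.8.8     # which route a given destination would actually take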

If anyone can be of assistance, that would be great! Thanks in advance.
#2
Hello,


I have a virtualized OPNsense router and can't seem to get decent performance when routing packets between VLANs.


On PVE I defined vmbr0:

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp on
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 1-4094
        pre-up ethtool -G bond0 rx 1024 tx 1024
        pre-up ethtool -K bond0 tx off gso off
        post-up ethtool -K vmbr0 tx off gso off
#Bridge All VLANs to SWITCH



Now I pass vmbr0 to my OPNsense VM as virtio; OPNsense creates vtnet0_vlan2 and vtnet0_vlan3 properly, serves DHCP properly, and routes traffic between the VLANs according to the firewall rules.


For testing I use an LXC attached to vmbr0 with VLAN tag 3, and the PVE host itself attached to vmbr2 as follows:

auto vmbr2
iface vmbr2 inet static
        address 10.2.2.2/24
        gateway 10.2.2.1
        bridge-ports vmbr0.2
        bridge-stp on
        bridge-fd 0
        post-up   ip rule add from 10.2.2.0/24 table 2Vlan prio 1
        post-up   ip route add default via 10.2.2.1 dev vmbr2 table 2Vlan
        post-up   ip route add 10.2.2.0/24 dev vmbr2 table 2Vlan
        pre-up ethtool -G vmbr0.2 rx 1024 tx 1024
        pre-up ethtool -K vmbr0.2 tx off gso off
        post-up ethtool -K vmbr2 tx off gso off
#VMs bridge



In OPNsense I have the settings to disable everything: CRC offloading, TSO, LRO, and VLAN hardware filtering as well.
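
To double-check that those settings actually reach the driver, I look at the interface flags from the OPNsense shell; a quick sketch, where the exact flag names are my reading of ifconfig(8), so treat them as approximate:

# show the enabled options/capabilities on the virtio NIC
ifconfig vtnet0 | grep options
# force the offloads off by hand for a quick test
ifconfig vtnet0 -rxcsum -txcsum -tso -lro -vlanhwtag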


All the CPU monitoring I can do shows that during an iperf3 run across VLANs there is ample idle time (around 80%) on all CPUs of all 3 nodes involved (it's a homelab, nothing else is stressing anything here).

And yet I only get 800-900 Mbit/s when crossing VLANs...
On the same VLAN I get 18-19 Gbit/s.

I also managed to get 12 Gbit/s from one VLAN to the router, but only by enabling CRC offloading in the OPNsense virtual router... And enabling CRC offload breaks inter-VLAN communication: same OPNsense VM, no rule changes, with CRC offloaded I get 12 Gbit/s within one VLAN but no VLAN 2 to VLAN 3 communication at all, and with CRC not offloaded I'm back to only ~850 Mbit/s...


I'm getting stuck...

The hardware NIC behind the bond is an Intel I225-V rev04; it's alone in the bond for now, and later it will be bonded with a gigabit Realtek in case I ever plug the cable into the wrong NIC.


If you have any ideas on how I should set this up to achieve >10 Gbit/s between VMs and LXCs regardless of which VLAN I put them on, anything would be helpful here, I think.


Thanks for reading, and thanks in advance for any idea!
#3
Hello,
I have my OPNsense with Unbound serving DHCP leases for the "lan" zone, so when I plug in my desktop it's then accessible as "desktop.lan".
Great. Now I added a remote location (my parents' house) with its own dedicated Unbound and DHCP, fully working for "parents.lan", so when I'm there I can easily dig and find the IPs of dad.parents.lan and mom.parents.lan.
I have set up IPsec between the sites, and we are on separate subnets, so we can use each other's IPs and get connectivity transparently through the IPsec VPN.

Now, when I'm home on my own OPNsense, I can't resolve dad.parents.lan (the answer is empty), since it's a DHCP lease known only to my parents' Unbound, not mine.
I tried this in /usr/local/etc/unbound.opnsense.d/parents.conf:

server:
forward-zone:
  name: "parents.lan"
  forward-addr: 192.168.1.90

I also tried with "parents.lan.", ".parents.lan." and ".parents.lan", and even "lan.", with no luck (starting with a dot makes Unbound refuse to start...).
192.168.1.90 being the IP of my parents' OPNsense, of course.

So I can't find a way for my Unbound to forward queries to my parents' Unbound when the query is for *.parents.lan.
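
In case it helps, this is roughly the shape of config I expect should work; the private-domain and domain-insecure lines are my guess, since the zone returns private RFC1918 addresses and isn't DNSSEC-signed:

server:
  private-domain: "parents.lan"
  domain-insecure: "parents.lan"

forward-zone:
  name: "parents.lan"
  forward-addr: 192.168.1.90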

Any help will be appreciated.

I do believe that the other way around (making my parents' DNS try to answer with local data first, then query my Unbound for *.lan) will be impossible, but that I can live with 😉
But having mine ask my parents' DNS for *.parents.lan should work, I just can't find how...

Thanks in advance
#4
General Discussion / Random Networking issues
January 22, 2024, 10:53:50 AM
I know my issue is probably in my Proxmox networking, so maybe this is more for Linux fans than OPNsense, but the network experts I know of live around here, so I'm trying here ;)

I'm facing some strange, random networking issues where LXCs on my PVE cluster are unable to communicate.


For instance, sometimes 10.0.10.51, which is an LXC, will not be able to communicate with 10.0.1.23, which is one of my switches.

When this occurs, I see no traffic at all coming in on the gateway (OPNsense; I made a packet capture, nothing there), meaning the traffic is not leaving the LXC or not leaving the network bridge. I think I also tried a packet capture on the PVE host running the LXC and did not see any traffic on vmbr10 either...
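
For reference, this is roughly how I captured, plus what I plan to check on the PVE host the next time it happens (the neighbour/fdb checks are just my guesses at where the frames could be getting lost):

# capture on the bridge while the LXC tries to reach the switch
tcpdump -eni vmbr10 host 10.0.1.23
# ARP entries on that bridge (the gateway 10.0.10.1 and the LXC should show up)
ip neigh show dev vmbr10
# which port the bridge thinks each MAC lives behind
bridge fdb show br vmbr10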

I mostly notice it thanks to my uptime-kuma instance running on this LXC, and I can't really understand why: there is a timeout (60 secs) during which uptime-kuma can neither ping nor curl the switch over HTTP, and without me doing anything it starts working again a few minutes later...


The LXC in question is an Ubuntu Jammy container attached with a static IP to vmbr10; the PVE host is running v8.1.3 on kernel 6.5.11-7-pve.


While this is occurring, I can reproduce it over ssh inside the LXC and communication is indeed down. During that time I was able to ssh onto my OPNsense gateway and confirm it can still ping or curl the switch with no problem, so if my OPNsense were receiving the packets from the LXC it would pass them along correctly...


Uptime-kuma is running inside Docker inside the LXC, and I do believe I have similar issues within Docker networking itself (some containers time out between my traefik instance and the gitea container, for example), but that seems unrelated since it's within Docker itself...


The host is an 8365U, so powerful enough; it's sitting around 30% CPU usage, with no swapping thanks to the 32GB of RAM I added. It is quite busy running around 100 containers total, some in LXCs, some in VMs, but overall there is no slowness or anything besides these random network dropouts.


I recently tried increasing ulimit -n to 99999 (it was 1024 everywhere) but it doesn't seem to help...

Any idea?


Here is my /etc/network/interfaces:


auto lo
iface lo inet loopback

auto enp1s0
iface enp1s0 inet manual
        mtu 9000
#eth0

auto enp2s0
iface enp2s0 inet manual
        mtu 9000
#eth1

auto enp3s0
iface enp3s0 inet manual
        mtu 9000
#eth2

auto enp4s0
iface enp4s0 inet manual
        mtu 9000
#eth3

auto enp5s0
iface enp5s0 inet manual
        mtu 9000
#eth4

auto enp6s0
iface enp6s0 inet manual
        mtu 9000
#eth5

iface enx00e04c534458 inet manual

auto bond1
iface bond1 inet manual
        bond-slaves enp5s0 enp6s0
        bond-miimon 100
        bond-mode balance-xor
        bond-xmit-hash-policy layer3+4
        mtu 9000
#LAGG_WAN

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0 enp2s0 enp3s0 enp4s0
        bond-miimon 100
        bond-mode balance-xor
        bond-xmit-hash-policy layer3+4
        mtu 9000
#LAGG_Switch

auto vmbr1000
iface vmbr1000 inet manual
        bridge-ports bond0
        bridge-stp on
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 1-4094
        mtu 9000
#Bridge All VLANs to SWITCH

auto vmbr2000
iface vmbr2000 inet manual
        bridge-ports bond1
        bridge-stp on
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 1-4094
        mtu 9000
#Bridge WAN

auto vmbr1000.10
iface vmbr1000.10 inet manual
        mtu 9000
#VMs

auto vmbr1000.99
iface vmbr1000.99 inet manual
        mtu 9000
#VMs

auto vmbr10
iface vmbr10 inet static
        address 10.0.10.9/24
        gateway 10.0.10.1
        bridge-ports vmbr1000.10
        bridge-stp off
        bridge-fd 0
        post-up   ip rule add from 10.0.10.0/24 table 10Server prio 1
        post-up   ip route add default via 10.0.10.1 dev vmbr10 table 10Server
        post-up   ip route add 10.0.10.0/24 dev vmbr10 table 10Server
        mtu 9000

auto vmbr99
iface vmbr99 inet static
        address 10.0.99.9/24
        gateway 10.0.99.1
        bridge-ports vmbr1000.99
        bridge-stp off
        bridge-fd 0
        post-up   ip rule add from 10.0.99.0/24 table 99Test prio 1
        post-up   ip route add default via 10.0.99.1 dev vmbr99 table 99Test
        post-up   ip route add 10.0.99.0/24 dev vmbr99 table 99Test
        mtu 9000

I do have the proper tables created, I believe:


root@pve:~ # cat /etc/iproute2/rt_tables.d/200_10Server.conf
200 10Server
root@pve:~ # cat /etc/iproute2/rt_tables.d/204_99Test.conf
204 99Test
root@pve:~ #


Thanks in advance for any help or ideas on how to fix it ;)
#5
23.1 Legacy Series / [SOLVED] CrowdSec with TLS
July 05, 2023, 02:26:28 AM
Solved: turns out it was a bug in CrowdSec v1.5.1, and someone kindly built me a 1.5.2 for BSD that solves it.

I'm stuck: I can get CrowdSec working with my private CA issuing certificates in a Docker setup, but putting it inside the OPNsense plugin fails.

Essentially, I am sure that in whatever context CrowdSec runs, it is not trusting my CA on OPNsense; it says so in the logs:
time="05-07-2023 02:23:38" level=error msg="error while performing request: tls: failed to verify certificate: x509: certificate signed by unknown authority; 4 retries left"

But in fact I have signed certs with this CA for other machines on the network, and when I use curl from the very same OPNsense machine against an HTTPS server whose cert is signed by my internal CA, it does work properly and recognizes the CA... I did have to import my CA under Trust -> Authorities for that, sure, but at least that works.

But somehow CrowdSec seems not to use it...

Any idea how to add a CA cert so that CrowdSec trusts it? Even looking at the plugin code I can't find what's missing on my machine to have CrowdSec trust my CA...
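
For anyone landing here before the fix above: since CrowdSec is written in Go, my assumption was that it reads the system trust store rather than anything plugin-specific, so this is what I was poking at (paths and variables below are assumptions, not something I verified in the plugin):

# is my CA actually part of the system bundle Go would read?
grep -c "BEGIN CERTIFICATE" /etc/ssl/cert.pem
# rebuild the FreeBSD trust store after dropping the CA file into /usr/local/share/certs/
certctl rehash
# Go also honors SSL_CERT_FILE, so as a test the agent could be started with
# SSL_CERT_FILE=/path/to/my-ca.pem in its environment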
#6
Hello.
Seems I have a similar issue as in this thread: https://forum.opnsense.org/index.php?topic=22942.0

I'm running the latest OPNsense, all updates done, inside a VM running on the latest Proxmox.
Trying to start Suricata (Intrusion Detection) I get this in the logs:
opening devname netmap:vtnet0/R failed: Invalid argument
And the service stops.

I am using "VirtIO (paravirtualized)" as the network card model for this VM.
I do not wish to pass through the real underlying physical NIC (an Intel e1000), as I configured it nicely in Proxmox (painful enough to do once) and I expect to upgrade the machine and the NIC in the future...

Do you believe it's a NIC issue with this paravirtualized model?
I could easily choose a "VMware vmxnet3" model in Proxmox, but if I do that, the interface names in OPNsense will change and I'm having trouble re-mapping them, especially since I have many VLANs and such.

Maybe someone can give me a trick: take a backup, manually rename vtnet0 and vtnet1 to whatever their new names will be once they are vmxnet3; that I could do, but without networking, how do I pass the file back to the OPNsense VM, and how do I reload the backup without networking...
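
To frame what I mean, this is roughly the renaming I have in mind (assuming vmxnet3 interfaces show up as vmx0/vmx1 on FreeBSD, and working on a config.xml backup downloaded from the GUI):

# on another machine, rewrite the interface names in the exported backup
sed -e 's/vtnet0/vmx0/g' -e 's/vtnet1/vmx1/g' config.xml > config-vmx.xml
# the open question is how to get config-vmx.xml back onto the VM and restore it
# while the interfaces are unassigned and there is no network...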

Thanks in advance for any kind of help.
#7
General Discussion / Migrate domain .lan to .local
February 18, 2023, 06:19:47 PM
Hello,

I first installed my OPNsense a few years ago and chose to put my LAN on a domain called ".lan", but now I hate myself, as most of the time browsers don't know this TLD and send me to Google or my default search engine when I type router.lan or server.lan in the address bar... unless I explicitly put https:// or http:// in front...

It's a pity, as .lan is much faster to type than .local, but hey, now I've seen that most browsers know and deal properly with .local.

Do you know of an easy way for me to switch to .local? I'd really like something that keeps resolving .lan, simply treating anything.lan as a CNAME of anything.local, so my existing setups keep working while I update all my configs (my /etc/fstab, my reverse proxies...). If it's not a CNAME but myserver.lan still resolves the same way as myserver.local, I'd be happy ;)

Also to note: I have 2 OPNsense boxes doing CARP failover and syncing their config...

If you know a better alternative to .local that would keep working with devices that try to use DNSSEC or Google's DNS, like my Android phones, feel free to share as well; I'd still like to keep it all contained in my OPNsense boxes.

Thanks in advance for any input!
#8
General Discussion / debugging pfsync
June 05, 2022, 02:13:20 AM
Hello,

A long time ago I got states to synchronize between my 2 firewalls, both running the latest OPNsense on similar hardware.

They both run on Proxmox with a virtual NIC that is a Linux bond on the host. WAN has no VLAN, but I have several VLANs for LAN; the bond on Proxmox trunks them all and I defined all the VLANs on the virtual interface in OPNsense.

All seems to work well; again, I had it working a while ago and can't find what I did to break it...

I have CARP failover that works (although all sessions get killed since no states are synced).
I have even recently tried adding pfsync0 to the carp group (no idea what that does...):
main$ifconfig pfsync0
pfsync0: flags=41<UP,RUNNING> metric 0 mtu 9000
        pfsync: syncdev: vtnet1_vlan9 syncpeer: 10.0.9.3 maxupd: 128 defer: off
        syncok: 1
        groups: pfsync carp

backup$ifconfig pfsync0
pfsync0: flags=41<UP,RUNNING> metric 0 mtu 9000
        pfsync: syncdev: vtnet1_vlan9 syncpeer: 10.0.9.2 maxupd: 128 defer: off
        syncok: 1
        groups: pfsync carp


I have all interfaces in the same order, with the same underlying names...

On both I set the Synchronize Peer IP properly, and they both use the same syncdev, as you can see.
But when I do a tcpdump on this interface, vtnet1_vlan9, I only see CARP traffic (heartbeats, or whatever they are called in the CARP world...).
I see nothing on UDP or on the pfsync protocol that would share states.
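
For reference, this is roughly how I filtered; pfsync is IP protocol 240 and CARP uses protocol 112, so the numeric filters below are how I try to tell them apart:

# CARP heartbeats (this is all I see)
tcpdump -ni vtnet1_vlan9 ip proto 112
# pfsync state updates (nothing shows up here)
tcpdump -ni vtnet1_vlan9 ip proto 240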

In the GUI I see something strange in the interface overview, see the attachment below. I get the same on both firewalls: only errors (a different number of errors on each, but hey...).


If someone has any clue as to how to debug pfsync0, see what these errors are, maybe get some logs...

I wanted to craft a "fake" pfsync packet and send it out through vtnet1_vlan9 just to make sure it's not being blocked by some unknown rule that isn't logging, but I don't see many packets being blocked, and I do have a rule on the proper interface allowing any to any on protocol PFSYNC...

Any help investigating this is really welcome!
#9
Hello, I am probably just misunderstanding how networking should work, and I would like some help.
I have 2 separate networks living on separate VLANs; OPNsense is the router on both VLANs and the gateway to the internet, as it's the only thing connected to my ISP box. LAN1 is 10.0.10.0/24, LAN2 is 10.0.20.0/24.
I run GitLab with a container registry on 10.0.10.10 and have it properly exposed to the internet with an FQDN, a certificate, and port forwarding.
Now, when I pull from this registry I use its public name, which resolves to my WAN address. It works fine when the computer that pulls is itself on the internet, and works fine as well when the computer is on LAN2 (10.0.20.101 for instance).
But it simply times out when I use it from any host on 10.0.10.0/24, except from 10.0.10.10 itself, since on that machine I probably put something in /etc/hosts to use the local loopback for registry.gitlab.example.com.

My understanding is that when 10.0.20.101 pulls from registry.gitlab.example.com, it sends packets to my public IP, port forwarding works fine, and OPNsense is probably doing NAT automatically. As you can see, my understanding is probably incomplete here.
But from 10.0.10.101, when I try to pull from registry.gitlab.example.com, my understanding is that the TCP packets go out to my public IP and are properly sent by OPNsense to 10.0.10.10 via the port forward, as I see them arrive on the 10.0.10.10 server in the GitLab logs. But I guess 10.0.10.101 then receives an answer directly from 10.0.10.10 for a TCP session it's trying to open with a public IP, so it gets confused, ignores the answer from 10.0.10.10, and keeps waiting for an answer from the public IP that never arrives since it was just ignored.

Maybe my understanding is wrong but I welcome any help.

I do have a similar issue for other services that run on the same server on LAN2.
I would really like all my computers to use DNS names, and the same names regardless of which network they are connected to, and I would rather get this working correctly with NAT than have to run a DNS server that gives different answers to the same query depending on who's asking.

I have tried to add a manual NAT rule on LAN2 for my public IPs; it didn't seem to work, maybe because it's not the right solution, maybe because I have several WANs, so several public IPs, that I keep in a hosts alias (NAT rules should work with aliases, no?).

My gut feeling is really that NAT is the issue, but I can't get my head around a solution. I like that no NAT is performed between LAN2 and LAN1, so the logs in GitLab show the real IP of the user, but I think within LAN1 I need something so that both the request and the answer go through the gateway and not directly between the hosts (which probably stay on the switch). Or at least something so the response doesn't get ignored ;)
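
To make it concrete, this is roughly the kind of rule I have in mind, in pf terms (a sketch only; I would actually create it in Firewall: NAT: Outbound, and the interface macros are placeholders). Note it would make the GitLab logs show the firewall's address for those reflected connections:

# when a LAN1 host reaches the registry through the port forward, also rewrite the
# source to the firewall's LAN1 address so the reply goes back through the firewall
nat on $lan1 from $lan1:network to 10.0.10.10 -> ($lan1)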

Any help is welcome ;)
Thanks in advance, and thanks for reading ;)
#10
Maybe I just did not understand, but I'm facing an issue where cross-VLAN communication is being difficult simply because my 2 computers are not on the same subnet...

I have VLAN10 using 10.0.10.0/24 and VLAN30 using 10.0.30.0/24

I would like each device to believe it is part of 10.0.0.0/16 while in fact I use FW rules to restrict each VLAN to its matching /24.

Right now I can pass and filter traffic between the 2 VLANs as I want, since, except on my OPNsense router, both VLANs are entirely separate.

But OPNsense does not allow me to serve DHCP for a subnet the interface is not part of (like keeping OPNsense on the /24 but telling the DHCP clients they are on a /16), and I believe I'll face other issues if I set the 2 VLAN interfaces in OPNsense to the same subnet...

The end goal is, for example, for Windows devices on these 2 VLANs to believe they are on the same subnet and therefore show up in the "network neighborhood", while still letting me say "VLAN10 has no internet access" and "VLAN30 has internet access".

I got this last part working with one subnet per VLAN interface, which breaks Windows network discovery (I've tried WS-Discovery, NetBIOS, WINS, a Samba master browser; I'm just not ready to set up a Windows server with AD just for that...).

Is what I'm considering a real option (having 2 interfaces on the same subnet and "routing" between them)?
If yes, some guidance on how to do it would be nice! (Static routes for each /24?)

Thanks in advance for any help!
#11
Hello,

I've followed the guide at https://docs.opnsense.org/manual/how-tos/carp.html and I do get my nodes switching from backup to master and back.

But I can't manage to get my states synced...

The only difference I see is that they communicate over a dedicated network, but they are not alone on that network: other nodes are present that do not participate in CARP, but those are my switches and the like on their management interfaces, so I have no security concerns on this VLAN. Still, it is a VLAN and not a direct cable between the 2 boxes...

Is that a known limitation of CARP that I just didn't see?

Or am I missing something ??

On the master I can see several thousand states, while the backup only has less than a hundred...
And when I start an ssh session between 2 machines and filter the states, I do see it on the active master but never on the backup; therefore, upon shutdown of the master the ssh session breaks... But I can start a new one, so the CARP IP for the gateway has indeed moved over.
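
For what it's worth, this is roughly how I compare states on the two nodes from the shell, filtering on the test machines' addresses:

# run on both master and backup while the ssh session is open
pfctl -ss | grep 10.0.
# the session shows up on the master only, never on the backup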

In Interfaces / Virtual IPs / Status, I see this on the master:
pfSync nodes
0c4b8edd
80503395
b47f8193
b95fd8c4
8f126435
7f757b24
a87f1071
42def9a3
01de2c4c
fb7c4474
786810a3
9aa1708f
8c3ee092
bc06947f
fb97c1f7
5d2d9f63


and only this on the backup:
pfSync nodes
f5c1236f
41242b9c
eb4db7be
d965f991
05676a28
89c378c8
d7408ebe
f8903274
d662c183
c64890b4


My pfsync interface is opt9 (VLAN 9 on vtnet0).
Both master and backup use virtual NICs passed by Proxmox; physically they are all Intel I211, and on both Proxmox hosts they are bridged together in a bond with exactly the same setup (4-port static LAG, tested and working with iperf3 and SMB file transfers...).

Thanks in advance for any pointer as to what could cause this.

Regards,
Toxic.

Edit: I just checked; I understand CARP uses multicast, so I verified that IGMP snooping is disabled on all my switches.
#12
High availability / CARP for third-party machines ?
March 29, 2021, 02:15:08 PM
Hello,

I'm fairly new to HA, but I've managed to set up a failover OPNsense router (that's simple as hell, thanks OPNsense community!) using CARP. I have two OPNsense VMs running on 2 distinct Proxmox hosts, and if I shut down one, the second takes over, very good!

I'm now looking to do something similar for my container running traefik (mainly as a proxy): my idea is to port-forward WAN traffic to a VIP, and have OPNsense assign this VIP to one or the other of the hosts depending on which one is available.

I'm not sure I'm willing to set up CARP on the CT itself (I wouldn't know where to start, all I did was the nice OPNsense GUI...), but my setup being quite simple, I could tell OPNsense: if the main OPNsense node is master on the gateway VIP, then the main traefik should hold the proxy VIP, and if the failover OPNsense holds the CARP gateway VIP, then the failover proxy IP should be the one the proxy VIP points to.

Is there a way to do this in OPNsense? Or do I need to dig into each CT to ensure they grab the VIP when necessary?

Thank you in advance for any help!
#13
Hello,
============
[Edit]: It seems I solved my issue by simply adding a routing table and adding a rule saying traffic from 10.0.10.0/24 uses that table, which only has a default route to the gateway.
I just need to make sure that survives a reboot now... But that's really a Debian question then, and not a networking issue anymore...
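
(For reference, roughly how I expect to make it persistent: a named table plus post-up lines on the bridge in /etc/network/interfaces; the table name/number are just examples.)

# /etc/iproute2/rt_tables.d/200_10Server.conf containing:  200 10Server
# then under "iface vmbr10 inet static":
post-up ip rule add from 10.0.10.0/24 table 10Server prio 1
post-up ip route add default via 10.0.10.1 dev vmbr10 table 10Server
post-up ip route add 10.0.10.0/24 dev vmbr10 table 10Server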
============

I'm really new to networking, it seems, since it took me a while to understand why my ssh connection keeps dropping: in fact, my client goes through the gateway, but the return packets come back directly, since the server knows a more direct route.

Now I could remove the direct route altogether, but in fact I like having this route in case my gateway goes down; not that OPNsense is unstable, but it's actually a VM that I sometimes shut down...

So the server has these routes for now :
# ip route show
default via 10.0.10.1 dev vmbr10 proto kernel onlink
10.0.10.0/24 dev vmbr10 proto kernel scope link src 10.0.10.9
10.0.11.0/24 dev vmbr0.11 proto kernel scope link src 10.0.11.9
10.0.30.0/24 dev vmbr0.30 proto kernel scope link src 10.0.30.9


And that's true for all 3 of the last routes: I would like the default route to be preferred over those 3 "direct" routes, since when the 10.0.10.1 gateway is up it works just fine, and as you can see, keeping the other routes breaks things while the gateway is up. (That's because my client has a 10.0.30.0/24 IP and contacts the server on its 10.0.10.0/24 IP, so client-to-server goes through the gateway while the return trip is direct, since the server already lives on 10.0.30.0/24; that bypasses the gateway, and the next packets are then dropped because the TCP state has been killed after seeing no traffic...)

I think there is a "weight" mechanism, but I'm not sure how it would actually detect that the 10.0.10.1 gateway is down...

Any help in setting up this Debian (Proxmox) server to always prefer the gateway over the other known routes would be greatly appreciated; info on how gateway status gets evaluated is also welcome!

And sorry if you feel hurt that I ask Debian-like questions on the OPNsense forum; it's where I usually find the most useful networking help ;)

Thanks in advance,

Regards

Edit: looking up route weight, it seems it's not what I need... In fact, I want some kind of route failover... Can we change the routing table when a CARP VIP is free, for example? In fact, vmbr10 will never be down, since it's a bridge with a virtual link to the gateway and a physical link to the failover gateway... but both gateways fighting for the CARP VIP might be down (with my skill in OPNsense that happens more often than I wish, and then this direct route is my last resort to access Proxmox and rescue the situation...).
#14
Hello,

I'm getting desperate; I need help finding a setup where 2 Windows clients can download files from my NAS over SMBv3, both at 1 Gbit/s at the same time, for a total of 2 Gbit/s sent from the NAS...

I've tried a lot of things and got LAGG working between lots of parties, reaching 2 Gbit/s several times, but never end-to-end from laptop to NAS.

What I have :

  • Box from my ISP receiving 5 Gbit/s fiber internet, but with only 3x1 Gbit/s Ethernet ports to use it (essentially 3 Gbit/s max, then...)
  • NAS Synology DS918+ with 2x1 Gbit/s
  • Switch TP-Link SG1024DE, 24 ports, but only static LAGG up to 4 ports per group, no LACP
  • Router/FW: Core i5 8th gen, 32GB RAM, with 6 integrated Intel I211 NICs, which is the Proxmox VM host for my web services and runs OPNsense as a VM
  • Backup router/FW: J1900, 8GB RAM and 6 Intel I211 NICs with bare-metal OPNsense (no VT-d, so no PCI passthrough; could become a Proxmox backup host if the LAGG is done in Proxmox rather than OPNsense)
  • 2 Windows laptops plugged into the switch
  • a WiFi AP and lots of other stuff connected to the WiFi or the switch, but those I can manage ;)

What I want :

  • for the 2 laptops to be able to use the Windows share on the NAS at full speed at the same time
  • not buying any more hardware

If you have ideas, you can stop reading here and propose them ;) If you have time, I'm now going to tell you what I tried that did not work...

What I would like but can compromise on :

  • only my firewall gets access to the ISP LAN side and serves as gateway/firewall
  • ability to use at least two 802.1Q VLANs so my smart-home things can be forced into one VLAN by the AP (I know how to do it)
  • my firewall(s) should see all traffic coming from and going to the NAS (I have logging enabled on all rules to/from the NAS and analyze them with Splunk)
  • be able to use 2 Gbit/s of my ISP traffic (split across several clients of course)
  • ability to access the Proxmox host even when its main router VM is down, for backups for example (the backup router could be the gateway if Proxmox still has some network access while its OPNsense VM is down/paused)
  • have the Core i5 be the primary router rather than the physical J1900 box, which I could repurpose one day; a backup router for high availability is not really a big deal, so let's focus on 1 router, and I'd like it to be the fastest computer so it can do more than routing... I like running traefik in Docker in an LXC on the Proxmox host...

I'm at a point where I'm considering the simplest setup, which fulfils almost none of the optional wishes: one flat network for the whole LAN, and each firewall bridged to the ISP (WAN) network with a CARP VIP. That would work, but I'd be mostly blind in Splunk as to what my NAS is doing on my LAN...

What I've tried, focusing on the primary router running in proxmox :

  • "Router on a stick" setup where Proxmox holds all 6 NICs in 2 bonds (max 4 ports per LAG group on the switch) and passes virtual 10 Gbit/s NIC(s) to the OPNsense VM: pass 1 VLAN-aware Linux bridge to the VM and do the VLAN tagging in OPNsense. The WAN VLAN had its own virtual NIC due to the 4-port LAG limit
  • Proxmox holds all the NICs in 2 bonds again, but creates a Linux bridge in Proxmox for each VLAN, then passes one 10 Gbit/s NIC per VLAN to the OPNsense VM
  • PCI passthrough of 5 of the 6 NICs to the OPNsense VM, handling

    • 1 big LAGG of 4 ports for the client VLAN and the NAS VLAN, one physical port for WAN
    • 1 LAGG of 2 ports for the LAN VLAN, another LAGG of 2 ports for the NAS VLAN, one port for WAN
  • PCI passthrough of all 6 NICs to the OPNsense VM, again playing with various LAG group configs
  • PCI passthrough of 4 NICs (LAGG 2-2, 1-3 or even 4 ports with VLANs on top of the LAGG) plus a "router on a stick" setup on the 2 remaining ports bonded in Proxmox

In almost all these configs I do get the full 2 Gbit/s on several legs of the network (almost all of them):

  • from the 2 clients running the iperf client (normal and reverse mode) to Proxmox running the iperf server (when there is no PCI passthrough)
  • from the clients, again running the iperf client, to the OPNsense VM running the iperf server
  • from the OPNsense VM running the iperf client (twice) to the NAS in normal mode
  • iperf client running on the OPNsense VM plus a client or Proxmox, to the NAS running the iperf server
  • NAS running the iperf server, OPNsense VM running the iperf client, and one laptop plugged into the switch on the same VLAN as the NAS, bypassing the gateway

In all these cases I am able to get 2 Gbit/s, except...
Cases 4 and 5 show where it breaks: OPNsense is able to SEND to the NAS at the full 2 Gbit/s, but running the iperf client with --reverse I don't get the full 2 Gbit/s; both iperf clients only add up to 1 Gbit/s...

So I never reached the goal at the very top of this post: 2 Gbit/s from the clients to the NAS through OPNsense...

Any help or idea would be greatly appreciated!

Thanks a lot for reading, and thanks for your help!
#15
Hello, I did a nice setup and was expecting that my 2 Windows clients could each get full gigabit speed using SMBv3 to my NAS, yet somehow they still share the bandwidth, and I don't get why...

TL;DR: Load balancing on a 2-NIC LAGG group without VLANs, from the router to the NAS through the switch, works fine. But the router has another 3-NIC LAGG group carrying several tagged VLANs to the switch, and there, 2 clients on 2 different ports of the same VLAN share a single gigabit to the router despite the router having LAGG...


My setup is :

  • TL-SG1024DE 24-port gigabit switch (no LACP...)
  • Synology NAS with 2 ports in a static LAGG group on VLAN 11 (untagged & PVID 11 on the switch)
  • OPNsense router with 5 NICs: 2 plugged into a static LAGG group on VLAN 11, 3 into another static LAGG group with several VLANs on it, but let's focus on VLAN 30
  • Client A: Windows laptop plugged into the switch on a port set to VLAN 30 untagged and PVID 30
  • Client B: Ubuntu laptop plugged into the switch on a different port, also set to VLAN 30 untagged and PVID 30

Both clients A and B get a proper IP from the router on VLAN 30, and each individually is able to copy files over SMBv3 at 100-110 MB/s when it's the only one doing so.
But when both clients A and B try to download a file over SMBv3 at the same time, their combined speed never gets to 2 Gbit/s; it stays at 1 Gbit/s only.

2 strange things happen that tell me I'm not so dumb and the LAGG is not too badly configured:
- When I plug client B into a different port on the switch that is directly attached to VLAN 11 like the NAS, I can indeed hit the NAS from clients A and B at the same time over SMBv3, and the total bandwidth seen from the NAS is 2 Gbit/s => so the SMBv3 protocol is not the issue, and the LAG that lets me get past 1 Gbit/s is working properly on that side.
- Stranger still: I started an iperf server on the NAS, and when I run iperf as a client on the OPNsense router with --reverse, I get 1 Gbit/s, and while it is running I can use client A to start an SMBv3 download and also get 110 MB/s at the same time, so the NAS really does push 2 Gbit/s of outgoing traffic at that moment! So the LAGG between router and NAS is indeed working and balancing the load, allowing 2 Gbit/s when there is more than 1 client.

Now, the last leg that could be misbehaving is the LAG from my router to my switch, the 3-port group carrying all the VLANs... And indeed: when I start an iperf server on the router and use client B as an iperf client with --reverse, I do see 1 Gbit/s, but I see it drop when client A also uses the router, for example to download the file from the NAS (client A is Windows, I don't have iperf on it...).

So the issue seems to be that my switch is not able to load-balance toward the 3-port LAG group I set up for connectivity to the router on all my VLANs.

Now, I do use CARP, so my router, which is the default gateway, is contacted through the CARP IP and not its real IP, but I doubt that could be a factor in this LAGG not working.

I set up the 3-port LAGG pretty much the same way as the 2-port one, only adding VLANs:
- lagg0, type LOADBALANCE, on igb2, igb3 and igb4 (somehow the OPNsense GUI under Other Types -> LAGG shows the same MAC address for all of these... but it does the same for the 2-port LAG that is working)
- created a boatload of VLANs on lagg0
- assigned "vlan 30 on lagg0" to lan or opt1, static IPv4
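
(To see whether the switch really hashes both clients onto the same LAGG member, my plan is to watch the per-member counters on the router while both downloads run; a sketch, using the igb2-igb4 names from the ifconfig output below:)

# per-interface traffic, refreshed every 5 seconds (one terminal per member)
netstat -I igb2 -w 5
netstat -I igb3 -w 5
netstat -I igb4 -w 5
# if both clients' flows land on the same member, that would explain the shared 1 Gbit/s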

Did I miss something?

Thank you in advance for your help, and thanks a lot for reading!

If you've read this far, you probably want some details.
ifconfig -a (I left out several lagg0_vlanXX that looked the same anyway):

igb0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c2
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
igb1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c2
hwaddr 00:40:d7:e0:09:c3
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
vtnet0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=800a8<VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,LINKSTATE>
ether 10:10:10:10:10:01
inet 10.0.10.2 netmask 0xffffff00 broadcast 10.0.10.255
inet 10.0.10.1 netmask 0xffffffff broadcast 10.0.10.1 vhid 10
inet6 fe80::1210:10ff:fe10:1001%vtnet0 prefixlen 64 scopeid 0x3
groups: GR_LAN_Servers
carp: MASTER vhid 10 advbase 1 advskew 0
media: Ethernet 10Gbase-T <full-duplex>
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
igb2: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c4
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
igb3: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c4
hwaddr 00:40:d7:e0:09:c5
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
igb4: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c4
hwaddr 00:40:d7:e0:09:c6
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
enc0: flags=0<> metric 0 mtu 1536
groups: enc
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x8
inet 127.0.0.1 netmask 0xff000000
groups: lo
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
pflog0: flags=100<PROMISC> metric 0 mtu 33160
groups: pflog
pfsync0: flags=41<UP,RUNNING> metric 0 mtu 9000
pfsync: syncdev: lagg0_vlan9 syncpeer: 10.0.9.3 maxupd: 128 defer: off
groups: pfsync
lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0 prefixlen 64 scopeid 0xb
laggproto loadbalance lagghash l2,l3,l4
laggport: igb2 flags=4<ACTIVE>
laggport: igb3 flags=4<ACTIVE>
laggport: igb4 flags=4<ACTIVE>
groups: lagg
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c2
inet6 fe80::240:d7ff:fee0:9c2%lagg1 prefixlen 64 scopeid 0xc
inet 10.0.11.2 netmask 0xffffff00 broadcast 10.0.11.255
inet 10.0.11.1 netmask 0xffffffff broadcast 10.0.11.1 vhid 11
laggproto loadbalance lagghash l2,l3,l4
laggport: igb0 flags=4<ACTIVE>
laggport: igb1 flags=4<ACTIVE>
groups: lagg GR_LAN_Servers
carp: MASTER vhid 11 advbase 1 advskew 0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan30: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan30 prefixlen 64 scopeid 0xd
inet 10.0.30.2 netmask 0xffffff00 broadcast 10.0.30.255
inet 10.0.30.1 netmask 0xffffffff broadcast 10.0.30.1 vhid 30
groups: vlan GR_LAN_Clients
carp: MASTER vhid 30 advbase 1 advskew 0
vlan: 30 vlanpcp: 2 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan3: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan3 prefixlen 64 scopeid 0xe
inet6 2a01:e0a:336:6ea0:240:d7ff:fee0:9c4 prefixlen 64 autoconf
inet 192.168.1.3 netmask 0xffffff00 broadcast 192.168.1.255
inet 192.168.1.2 netmask 0xffffffff broadcast 192.168.1.2 vhid 3
groups: vlan GR_WAN
carp: MASTER vhid 3 advbase 1 advskew 0
vlan: 3 vlanpcp: 2 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
lagg0_vlan9: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan9 prefixlen 64 scopeid 0xf
inet 10.0.9.2 netmask 0xffffff00 broadcast 10.0.9.255
inet 10.0.9.1 netmask 0xffffffff broadcast 10.0.9.1 vhid 9
groups: vlan GR_LAN_Servers
carp: MASTER vhid 9 advbase 1 advskew 0
vlan: 9 vlanpcp: 7 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan11: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan11 prefixlen 64 scopeid 0x10
groups: vlan
vlan: 11 vlanpcp: 3 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan22: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan22 prefixlen 64 scopeid 0x11
inet 10.0.22.2 netmask 0xffffff00 broadcast 10.0.22.255
inet 10.0.22.1 netmask 0xffffffff broadcast 10.0.22.1 vhid 22
groups: vlan GR_LAN_NoAccess
carp: MASTER vhid 22 advbase 1 advskew 0
vlan: 22 vlanpcp: 1 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan40: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan40 prefixlen 64 scopeid 0x12
inet 10.0.40.2 netmask 0xffffff00 broadcast 10.0.40.255
groups: vlan GR_LAN_Clients
vlan: 40 vlanpcp: 2 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan50: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan50 prefixlen 64 scopeid 0x13
inet 10.0.50.2 netmask 0xffffff00 broadcast 10.0.50.255
groups: vlan GR_LAN_Clients
vlan: 50 vlanpcp: 0 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan60: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan60 prefixlen 64 scopeid 0x14
inet 10.0.60.2 netmask 0xffffff00 broadcast 10.0.60.255
groups: vlan GR_LAN_Clients
vlan: 60 vlanpcp: 1 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan1 prefixlen 64 scopeid 0x15
inet 10.0.1.2 netmask 0xffffff00 broadcast 10.0.1.255
groups: vlan GR_LAN_NoAccess
vlan: 1 vlanpcp: 1 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan8: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan8 prefixlen 64 scopeid 0x16
inet 10.0.8.2 netmask 0xffffff00 broadcast 10.0.8.255
groups: vlan GR_LAN_Servers
vlan: 8 vlanpcp: 0 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan99: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan99 prefixlen 64 scopeid 0x17
inet 10.0.99.2 netmask 0xffffff00 broadcast 10.0.99.255
groups: vlan GR_LAN_NoAccess
vlan: 99 vlanpcp: 0 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>


[EDIT]:
I noticed that I don't have any VLAN untagged on this LAG group on the switch, and accordingly I had not assigned the lagg0 interface itself to any optXX. So I thought maybe lagg0 is not "properly enabled": I added an OPT11, assigned lagg0 to it, enabled it but left both IPv4 and IPv6 set to "None", and I also did that for a VLAN I declared on lagg0 but never got around to using... So I'm posting the full ifconfig -a just in case...
#16
General Discussion / VLAN on Bridge
March 15, 2021, 09:44:36 AM
Hello,
I will set up about 10 VLANs on my OPNsense firewall, and I'm not sure whether I should declare the VLANs at the physical NIC, the LAG, or the bridge level.

To be clear, I want to use
- 3x 1 Gbit/s NICs as a LAG to my switch, to increase bandwidth between several clients, servers and VLANs
- 1x 10 Gbit/s virtual NIC to my VMs

There will be a VLAN that should live across those 4 interfaces.
I was thinking I should bridge the LAG and the vNIC into one big LAN and then create the VLAN on the LAN bridge.
But I could also create the VLAN on the LAG and on the vNIC, and then bridge those 2 VLANs.

I could even put the VLAN on each NIC, then LAG the VLANs, and then bridge...

Not sure if that's clear to you; I hope you can provide insight on what would be the best way for performance.

Since I plan to create several VLANs, I see only one advantage to creating them directly on the bridge: it requires fewer declarations, one per VLAN instead of 3 per VLAN (on the LAGG, on the vNIC, then bridge them).

Thanks in advance for your kind help.
#17
I'm trying to understand the consequences of getting rid of the current VLANs on my home network.
I'm not really in need of the real network isolation that VLANs offer, and it's too maintenance-heavy for me to manage VLAN IDs port by port, and device by device for those that support it, so I'm trying to simplify and to see what I'll lose.

Let's say I put this in place:

  • OPNsense router with the LAN NIC set to 10.0.0.1/16
  • Router is only connected to an L2 switch; all other devices are connected to the switch
  • Server has static IP 10.0.1.2/24
  • Client has static IP 10.0.2.3/24

My current understanding is that when the client contacts the server, 10.0.1.2 being outside its subnet, it will route the packet to its gateway; the OPNsense router will receive the packets, apply all FW rules, and if it's allowed, send the traffic along to the server.
I think the switch will not be able to forward to the server directly without the router having seen the traffic, since the client itself will have put the MAC address of its gateway on the frame... Am I correct here?

I'm also still looking into a way to actually put my clients into a smaller subnet via DHCP, as most of my clients use DHCP (even some servers do, but I could switch those to static IPs). I'm still unclear how, on the same NIC on the router, to have several subnets coexist even if only one of them has DHCP enabled. Best of all would be being able to specify the subnet of each device in its static DHCP entry...

Any advice on how to deal with several subnets on a single NIC is welcome ;)

Thank you in advance.
#18
Hello,

Does anyone know when OPNsense will likely upgrade to FreeBSD 13?

I'm told the coming FreeBSD 13 will bring support for my USB 2.5G NIC using the RTL8156B, so I'm quite keen to see whether a USB NIC can indeed give me a speed boost...

I'm not looking for a commitment, just a rough estimate from someone more used to the OPNsense dev cycle, since I'm still very new to OPNsense. Like end of 2021, or not before 2022...

So if anyone has any idea when I can expect the OPNsense updater to propose an upgrade that moves me to FreeBSD 13 and gives me that driver, I'm all ears!

Otherwise I'll just patiently wait ;)

Thanks in advance!
#19
I'm trying to find a simple way to apply a firewall rule to a range of IPs.
Say my FW interface is set to 10.0.0.1/16, client A is 10.0.1.55/16 and client B is 10.0.2.55/16.
If I add a FW rule applying to source 10.0.1.0/24, will it match traffic from client A and not from client B? Or will it not match traffic from client A since the netmask is different?

I'm trying to find a reasonable way to apply FW rules to a range of IPs; maybe there's an option somewhere else that I haven't found yet.
Thanks in advance for your kind help.
#20
I just came across this today:
https://www.alibaba.com/product-detail/OEM-OPNsense-Pfsense-firewall-hardware-mini_62100677126.html?spm=a2700.wholesale.deiletai6.1.76201d58MNU2sm
Pricing on such devices looks real; another one here:
https://www.alibaba.com/product-detail/POE-mini-pc-intel-3855U-CPU_62015422259.html?spm=a2700.wholesale.deiletai6.5.5e906d6chvJ1Gm

I'm still completely new to SFP+, and actually to anything above 1 Gbit/s (home user...).
But I'm wondering if something like this could actually route that much traffic... I do have a 5 Gbit/s fiber connection from my ISP; I could probably plug the SFP(+?) connector in there and find a direct-attach cable or a BaseT module to go to my NAS/server and workstation...

But it somehow seems too good to be true... especially with a Celeron 3855U... But with an i7 7600U?

Do you think it can sustain over 5 Gbit/s from my desktop to my server or to my ISP (provided I manage to use this instead of my ISP modem, if the ISP is nice enough to pass along the PPPoE settings...)?

Or is such a device really not suited for such speeds, and does it exist only to have an SFP connector for bridging longer distances in the industrial world, not to sustain high bandwidth?

Thanks for any insight ;)