Messages - toxic

#31
Hello,
============
[Edit]: It seems I solved my issue by simply adding a routing table plus a rule saying that traffic from 10.0.10.0/24 uses this routing table, which only contains a default route to the gateway.
I just need to make sure that survives a reboot now... But then that's really a Debian question and no longer a networking issue...
============
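For the reboot part: on Debian (and thus Proxmox), one place to persist the rule and table is /etc/network/interfaces, recreating them when the bridge comes up. A minimal sketch, assuming ifupdown is in use; the table number 100 is arbitrary, and the addresses are the ones from this setup:

```shell
# /etc/network/interfaces fragment (Debian/Proxmox, ifupdown)
auto vmbr10
iface vmbr10 inet static
    address 10.0.10.9/24
    gateway 10.0.10.1
    # dedicated table holding only a default route via the gateway
    post-up ip route add default via 10.0.10.1 dev vmbr10 table 100
    # replies sourced from 10.0.10.0/24 consult that table first
    post-up ip rule add from 10.0.10.0/24 table 100
    pre-down ip rule del from 10.0.10.0/24 table 100 || true
```

The pre-down line just cleans the rule up again so repeated ifdown/ifup cycles don't stack duplicate rules.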

I'm really new to networking, it seems, since it took me a while to understand why my SSH connection keeps dropping: my client is going through the gateway, but the return packets come back directly because the server knows a more direct route.

Now, I could cut off the direct route altogether, but I actually like having it in case my gateway goes down; not that opnSense is unstable, but it's a VM that I sometimes shut down...

So the server currently has these routes:
# ip route show
default via 10.0.10.1 dev vmbr10 proto kernel onlink
10.0.10.0/24 dev vmbr10 proto kernel scope link src 10.0.10.9
10.0.11.0/24 dev vmbr0.11 proto kernel scope link src 10.0.11.9
10.0.30.0/24 dev vmbr0.30 proto kernel scope link src 10.0.30.9


And the same applies to the last 3 routes: I would like the default route to be preferred over those 3 "direct" routes, since when the 10.0.10.1 gateway is up everything works just fine, and as you can see, keeping the other routes while the gateway is up breaks things. (That's because my client has an IP in 10.0.30.0/24 and contacts the server on its 10.0.10.0/24 address, so client-to-server traffic goes through the gateway while the return trip is direct, since the server already lives on 10.0.30.0/24. But that bypasses the gateway, and subsequent packets are dropped because the firewall's TCP state has been killed after seeing no traffic.)

I think there is a route "weight" mechanism, but I'm not sure how it would actually detect that the 10.0.10.1 gateway is down...

Any help in setting up this Debian (Proxmox) server to always prefer the gateway over the other known routes would be greatly appreciated; info on how gateway status is evaluated is also welcome!

And sorry if it bothers anyone that I ask Debian-type questions on the opnSense forum; it's where I usually find the most useful networking help ;)

Thanks in advance,

Regards

Edit: looking up route weight, it seems it's not what I need... What I actually want is route failover... Can we change the routing table when a CARP VIP is unclaimed, for example? vmbr10 will never be down, since it's a bridge with a virtual link to the gateway and a physical link to the failover gateway... but both gateways fighting over the CARP VIP might be down (with my opnSense skills that happens more often than I'd wish, and then this direct route is my last resort to access Proxmox and rescue the situation...)
#32
OK, got this working; I actually had 2 issues:


   1/ Getting the bond working with a VLAN-aware bridge in Proxmox:
        - create a bond of all the physical interfaces you need
        - create vmbrX with this bond as its slave
        - attach the opnSense VM to vmbrX to access all VLANs
        - attach VMs to vmbrX.Y, with Y being the VLAN
        - all traffic works, from opnSense to vmbrX to the physical devices, with load balancing; no issues!
        - (what I did before was attach vmbrY to bond0.Y, which is wrong but can almost work; it fails in subtle ways...)
   2/ Getting the Synology NAS to use more than layer 2 hashing for its bond, because with a gateway in between, layer 2 hashing will always put all the traffic to the gateway on the same physical device, regardless of the destination device's IP...
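For reference, step 1 above maps to an /etc/network/interfaces along these lines. This is only a sketch: the NIC names eno1/eno2 and the bond mode/hash are assumptions, not taken from the post:

```shell
# /etc/network/interfaces (Proxmox) -- bond the physical ports, put ONE
# VLAN-aware bridge on top, and let each guest pick its VLAN with a tag
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode balance-xor
    bond-xmit-hash-policy layer3+4

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

VMs then attach to vmbr1 with a VLAN tag (the vmbrX.Y of the list above), instead of the vmbrY-on-bond0.Y layout that fails in subtle ways.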


Right now I'm still facing some issues, but I think I understand them now:


What happens is that I can almost never get 2 Windows laptops to achieve a combined 2 Gb/s downloading from the NAS. I had changed the bond on the NAS to use layer 3+4 as the hash algorithm.

Now, the 2 laptops are on 10.0.30.x while the NAS is on 10.0.11.x, so layer 2 is not enough for the hash: all traffic goes through the gateway anyway, which means the same layer 2 MAC address for both file downloads...

But layer 3+4 will load-balance more than just the 2 flows involved in this download, and SMB may even use several TCP connections... So even if layer 3+4 gives different hash results, the hash only has 2 outgoing physical interfaces on the NAS to choose between; inevitably there will be a lot of collisions and a lot of traffic forced to share the same physical port.
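The collision mechanics are easy to model: the bond hashes each flow's addresses and ports and takes the result modulo the number of slaves, with no awareness of load. A toy sketch (cksum stands in for the kernel's real hash function, which differs, so only the behaviour matters here, not the exact NIC each flow lands on):

```shell
#!/bin/sh
# nic_for_flow: map a (src_ip, dst_ip, src_port, dst_port) tuple to one of
# 2 NICs the way a layer3+4 xmit policy does: hash the tuple, modulo NIC count.
nic_for_flow() {
    h=$(printf '%s' "$1|$2|$3|$4" | cksum | cut -d' ' -f1)
    echo $(( h % 2 ))
}

# Two SMB downloads from different clients: whether they share a NIC depends
# purely on hash parity, not on which NIC is currently busy.
nic_for_flow 10.0.30.5 10.0.11.9 51234 445
nic_for_flow 10.0.30.6 10.0.11.9 51234 445
```

With 2 members, two random flows collide half the time, which matches the hit-and-miss behaviour described above.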


But with this setup, playing around with VMs downloading from the NAS using iperf, the result depends on the port I choose, and I am reliably able to avoid collisions like this: launch an iperf client downloading on one machine, then launch an iperf client on another machine; if there is a collision (the bandwidth of the first iperf drops), cancel and retry on the same machine with a different server port, keeping the first iperf running. Rinse and repeat, changing ports, and at some point you get lucky and see 2 Gb/s. It's not always the same port combination, even for the same IPs, but I always find one that works.


This way, it takes some time, but I can always find a way to get 2 Gb/s from the NAS!

(In fact, I even got 2 Gb/s downloading from the NAS all the while uploading 2 Gb/s to the NAS! I wasn't aware of it, but apparently a 1 Gb/s NIC can do 1 Gb/s down and 1 Gb/s up at the same time; full duplex!)


So I was hoping to find a way to more reliably get 2 Gb/s downloading from the NAS to 2 devices...


In fact, I don't have any single device that can exceed 1 Gb/s by itself, so layer 3+4 doesn't make sense in this case, I think.

My thinking now is this: since most of my file downloads will be devices in 10.0.30.x downloading from the NAS in 10.0.11.x, if I change the hash algorithm on the NAS to layer 2+3, the layer 2 part will always be the same (the MAC of the gateway), but layer 3 will always differ, because each device on my network has its own IP and my gateway is not doing NAT between my LANs. So I was hoping that with layer 2+3 I'd have a higher chance of reliably achieving 2 Gb/s... But in fact, layers 2+3 already differ every time right now; I'm just unlucky with hash collisions when downloading via Windows Explorer, where I can't control the port it uses... I got it working once or twice, but I had to go onto the NAS and kill the existing connections, hoping the new one would result in a different NIC being selected given the new source port in layer 4...


But in the end I also see that with so many IPs and only 2 physical NICs, there will be a lot of "hash collisions", and the logic is apparently not able to see that a specific physical NIC is overloaded and that some TCP connections would benefit from moving to the other NIC...

So I'm not really hopeful about changing the NAS bond to layer 2+3...

It's disappointing to see 1 NIC on the NAS being overloaded while the other one just sits around doing nothing, and both Windows laptops struggle to grab more frames than the other...


It's quite a disappointment to now understand how LAGG really works... I won't be putting more NICs into my devices... Can't wait for 5 Gb/s to catch on in the home market; sadly, even 2.5 Gb/s isn't there yet...


#33
Hello,

I'm getting desperate; I need help finding a setup where 2 Windows clients can download files from my NAS using SMBv3, each at 1 Gb/s at the same time, for a total of 2 Gb/s sent from the NAS...

I've tried a lot of things and got LAGG working between many pairs of devices, achieving 2 Gb/s several times, but never end-to-end from laptop to NAS.

What I have:

  • My ISP's box, receiving 5 Gb/s fiber internet but offering only 3x1 Gb/s Ethernet ports to use it (so essentially 3 Gb/s max...)
  • Synology DS918+ NAS with 2x1 Gb/s NICs
  • TP-Link SG1024DE 24-port switch, but only static LAGG, up to 4 ports per group, no LACP
  • Router/FW: 8th-gen Core i5, 32 GB RAM, with 6 integrated Intel I211 NICs, acting as Proxmox VM host for my web services and running opnSense as a VM
  • Backup router/FW: J1900, 8 GB RAM and 6 Intel I211 NICs with bare-metal opnSense (no VT-d, so no PCI passthrough; it can become a Proxmox backup if the LAGG is done in Proxmox rather than opnSense)
  • 2 Windows laptops plugged into the switch
  • a WiFi AP and lots of other stuff connected to the WiFi or switch, but these I can manage ;)

What I want :

  • for the 2 laptops to be able to use the Windows share on the NAS at full speed at the same time
  • not buying any more hardware

If you have ideas, you can stop reading here and propose them ;) If you have time, I'll now tell you what I tried that did not work...

What I would like but can compromise on :

  • only my firewall gets access to the ISP LAN side and serves as gateway/firewall
  • ability to use at least two 802.1Q VLANs so my SmartHome things can be forced into one VLAN by the AP (I know how to do it)
  • my firewall(s) should see all traffic coming from and going to the NAS (I have logging enabled on all rules to/from the NAS and analyze them with Splunk)
  • be able to use 2 Gb/s of my ISP bandwidth (split over several clients, of course)
  • ability to access the Proxmox host even when its main router VM is down, during backups for example (the backup router could be the gateway, provided Proxmox still has some network access while its opnSense VM is down/paused)
  • have the core i5 be the primary router rather than the physical J1900 box, which I could one day repurpose; a backup router for high availability is not really a big issue, so let's focus on 1 router, and I'd like it to be the fastest machine so it can do more than routing... I like running traefik in Docker in LXC on the Proxmox host...

I'm at the point of considering the simplest setup, which fulfils almost none of the optional wishes: one flat network for the whole LAN, and each firewall bridged to the ISP (WAN) network with a CARP VIP. That would work, but I'd be mostly blind in Splunk as to what my NAS is doing for my LAN...

What I've tried, focusing on the primary router running in Proxmox:

  • "Router on a stick" setup where Proxmox holds all 6 NICs in 2 bonds (max 4 ports per LAG group on the switch) and passes virtual 10 Gb/s NIC(s) to the opnSense VM: pass 1 VLAN-aware Linux bridge to the VM and set up the VLAN tagging in opnSense. The WAN VLAN had its own virtual NIC due to the 4-port LAG limit
  • Proxmox holds all the NICs in 2 bonds again, but creates a Linux bridge in Proxmox for each VLAN, then passes one 10 Gb/s virtual NIC per VLAN to the opnSense VM
  • PCI pass-through of 5 of the 6 NICs to the opnSense VM, handling

    • 1 big LAGG of 4 ports for the client VLAN and NAS VLAN, one physical port for WAN
    • 1 LAGG of 2 ports for the LAN VLAN, another LAGG of 2 ports for the NAS VLAN, one port for WAN
  • PCI pass-through of all 6 NICs to the opnSense VM, playing again with various LAG group configs
  • PCI pass-through of 4 NICs (LAGG 2-2, 1-3 or even 4 ports with VLANs on top of the LAGG) plus a "router on a stick" setup on the 2 remaining ports bonded in Proxmox

In almost all of these configs, I do get the full 2 Gb/s on several legs of the network (almost all of them):

  • from the 2 clients running iperf clients (normal and reverse mode) to Proxmox running an iperf server (when there is no PCI pass-through)
  • from the clients again, running iperf clients, to the opnSense VM running an iperf server
  • from the opnSense VM running iperf clients (twice) to the NAS, in normal mode
  • iperf clients running on the opnSense VM plus a client or Proxmox, to the NAS running an iperf server
  • NAS running an iperf server, opnSense VM running an iperf client, and one laptop plugged into the switch on the same VLAN as the NAS, bypassing the gateway

In all these cases I am able to get 2 Gb/s, except...
Cases 4 and 5 show where it breaks: opnSense is able to SEND to the NAS at the full 2 Gb/s, but running the iperf clients with --reverse, I don't get the full 2 Gb/s; the two iperf clients together only add up to 1 Gb/s...

So I never reached the goal at the very top of this post: 2 Gb/s from clients to NAS through opnSense...

Any help or idea would be greatly appreciated !

Thanks a lot for your reading and help !
#34
Hello, I built a nice setup expecting that my 2 Windows clients could each get full gigabit speed to my NAS using SMBv3, but somehow they still share the bandwidth, and I don't get why...

TL;DR: Load balancing on a 2-NIC LAG group without VLANs, from router to NAS through the switch, is working fine. But the router has another LAG group of 3 NICs carrying several tagged VLANs to the switch, and there, 2 clients on 2 different ports of the same VLAN share a single Gb/s to the router despite the router having LAGG...


My setup is :

  • TL-SG1024DE 24-port gigabit switch (no LACP...)
  • Synology NAS with 2 ports in a static LAGG group on VLAN 11 (untagged & PVID 11 on the switch)
  • opnSense router with 5 NICs: 2 plugged into a static LAGG group on VLAN 11, 3 into another static LAGG group carrying several VLANs, but let's focus on VLAN 30
  • Client A: Windows laptop plugged into a switch port set to VLAN 30 untagged and PVID 30
  • Client B: Ubuntu laptop plugged into a different switch port, also set to VLAN 30 untagged and PVID 30

Both clients A and B get a proper IP from router R on VLAN 30, and each individually is able to copy files using SMBv3 at 100-110 MB/s when it's the only one doing so.
But when clients A and B both try to download a file using SMBv3 at the same time, their combined speed never reaches 2 Gb/s; it stays at only 1 Gb/s.

2 strange things happen that tell me I'm somehow not so dumb and the LAGG is not too badly configured:
- When I plug client B into a different switch port attached directly to VLAN 11, like the NAS, I can indeed hit the NAS from clients A and B at the same time using SMBv3, and the total bandwidth seen from the NAS is 2 Gb/s => so the SMBv3 protocol is not the issue, and it is fast enough to show that using LAG to get past 1 Gb/s works properly
- Stranger still: I started an iperf server on the NAS, and when I use iperf as a client on the opnSense router with --reverse, I get 1 Gb/s of bandwidth, and while it is running I can use client A to start an SMBv3 download and also get 110 MB/s at the same time, so the NAS really is pushing 2 Gb/s of outgoing traffic at that moment! So the LAGG between router and NAS is indeed working and balancing the load, allowing 2 Gb/s with more than 1 client.

Now, the last leg that could be misbehaving is the LAG from my router to my switch, the 3-port group carrying all the different VLANs... And indeed: when I start an iperf server on the router and use client B as an iperf client with --reverse, I do see 1 Gb/s, but I see it drop when client A also uses the router, for example to download a file from the NAS (client A runs Windows and I don't have iperf on it...)

So the issue seems to be that my switch is not able to load-balance traffic onto the 3-port LAG group I set up for connectivity to the router across all my VLANs.

Now, I do use CARP, so my router, which is the default gateway, is contacted through the CARP IP and not its real IP, but I doubt that could be a factor in this LAGG not working.

I set up the 3-port LAGG pretty much the same way as the 2-port one, only adding VLANs:
- lagg0 of type LOADBALANCE on igb2, igb3 and igb4 (somehow the opnSense GUI under Other Types -> LAGG shows the same MAC address for all of these... but it does the same for the 2-port LAG that is working)
- created a boatload of VLANs on lagg0
- assigned "vlan 30 on lagg0" to lan or opt1, with a static IPv4
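From the shell, that GUI configuration should correspond to roughly these FreeBSD commands (a sketch; opnSense normally generates them itself). The identical MAC on all members is expected behaviour as far as I know: lagg puts every member port under the LAG's single MAC:

```shell
# 3-port loadbalance lagg hashed on L2/L3/L4, plus the VLAN 30 child interface
ifconfig lagg0 create
ifconfig lagg0 laggproto loadbalance laggport igb2 laggport igb3 laggport igb4
ifconfig lagg0 lagghash l2,l3,l4 up
ifconfig vlan30 create vlan 30 vlandev lagg0
```

One thing worth keeping in mind: this hash only governs frames the router transmits; in the switch-to-router direction it is the switch's own static-LAG hash (often purely MAC-based on entry-level switches) that picks the member port.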

Did I miss something?

Thank you in advance for your help and thanks a lot anyway for your reading !

If you read this far, you probably want some details.
ifconfig -a (I left out several lagg0_vlanXX interfaces that all looked the same anyway):

igb0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c2
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
igb1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c2
hwaddr 00:40:d7:e0:09:c3
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
vtnet0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=800a8<VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,LINKSTATE>
ether 10:10:10:10:10:01
inet 10.0.10.2 netmask 0xffffff00 broadcast 10.0.10.255
inet 10.0.10.1 netmask 0xffffffff broadcast 10.0.10.1 vhid 10
inet6 fe80::1210:10ff:fe10:1001%vtnet0 prefixlen 64 scopeid 0x3
groups: GR_LAN_Servers
carp: MASTER vhid 10 advbase 1 advskew 0
media: Ethernet 10Gbase-T <full-duplex>
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
igb2: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c4
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
igb3: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c4
hwaddr 00:40:d7:e0:09:c5
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
igb4: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c4
hwaddr 00:40:d7:e0:09:c6
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
enc0: flags=0<> metric 0 mtu 1536
groups: enc
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x8
inet 127.0.0.1 netmask 0xff000000
groups: lo
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
pflog0: flags=100<PROMISC> metric 0 mtu 33160
groups: pflog
pfsync0: flags=41<UP,RUNNING> metric 0 mtu 9000
pfsync: syncdev: lagg0_vlan9 syncpeer: 10.0.9.3 maxupd: 128 defer: off
groups: pfsync
lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0 prefixlen 64 scopeid 0xb
laggproto loadbalance lagghash l2,l3,l4
laggport: igb2 flags=4<ACTIVE>
laggport: igb3 flags=4<ACTIVE>
laggport: igb4 flags=4<ACTIVE>
groups: lagg
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=800028<VLAN_MTU,JUMBO_MTU>
ether 00:40:d7:e0:09:c2
inet6 fe80::240:d7ff:fee0:9c2%lagg1 prefixlen 64 scopeid 0xc
inet 10.0.11.2 netmask 0xffffff00 broadcast 10.0.11.255
inet 10.0.11.1 netmask 0xffffffff broadcast 10.0.11.1 vhid 11
laggproto loadbalance lagghash l2,l3,l4
laggport: igb0 flags=4<ACTIVE>
laggport: igb1 flags=4<ACTIVE>
groups: lagg GR_LAN_Servers
carp: MASTER vhid 11 advbase 1 advskew 0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan30: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan30 prefixlen 64 scopeid 0xd
inet 10.0.30.2 netmask 0xffffff00 broadcast 10.0.30.255
inet 10.0.30.1 netmask 0xffffffff broadcast 10.0.30.1 vhid 30
groups: vlan GR_LAN_Clients
carp: MASTER vhid 30 advbase 1 advskew 0
vlan: 30 vlanpcp: 2 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan3: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan3 prefixlen 64 scopeid 0xe
inet6 2a01:e0a:336:6ea0:240:d7ff:fee0:9c4 prefixlen 64 autoconf
inet 192.168.1.3 netmask 0xffffff00 broadcast 192.168.1.255
inet 192.168.1.2 netmask 0xffffffff broadcast 192.168.1.2 vhid 3
groups: vlan GR_WAN
carp: MASTER vhid 3 advbase 1 advskew 0
vlan: 3 vlanpcp: 2 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
lagg0_vlan9: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan9 prefixlen 64 scopeid 0xf
inet 10.0.9.2 netmask 0xffffff00 broadcast 10.0.9.255
inet 10.0.9.1 netmask 0xffffffff broadcast 10.0.9.1 vhid 9
groups: vlan GR_LAN_Servers
carp: MASTER vhid 9 advbase 1 advskew 0
vlan: 9 vlanpcp: 7 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan11: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan11 prefixlen 64 scopeid 0x10
groups: vlan
vlan: 11 vlanpcp: 3 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan22: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan22 prefixlen 64 scopeid 0x11
inet 10.0.22.2 netmask 0xffffff00 broadcast 10.0.22.255
inet 10.0.22.1 netmask 0xffffffff broadcast 10.0.22.1 vhid 22
groups: vlan GR_LAN_NoAccess
carp: MASTER vhid 22 advbase 1 advskew 0
vlan: 22 vlanpcp: 1 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan40: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan40 prefixlen 64 scopeid 0x12
inet 10.0.40.2 netmask 0xffffff00 broadcast 10.0.40.255
groups: vlan GR_LAN_Clients
vlan: 40 vlanpcp: 2 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan50: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan50 prefixlen 64 scopeid 0x13
inet 10.0.50.2 netmask 0xffffff00 broadcast 10.0.50.255
groups: vlan GR_LAN_Clients
vlan: 50 vlanpcp: 0 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan60: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan60 prefixlen 64 scopeid 0x14
inet 10.0.60.2 netmask 0xffffff00 broadcast 10.0.60.255
groups: vlan GR_LAN_Clients
vlan: 60 vlanpcp: 1 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan1 prefixlen 64 scopeid 0x15
inet 10.0.1.2 netmask 0xffffff00 broadcast 10.0.1.255
groups: vlan GR_LAN_NoAccess
vlan: 1 vlanpcp: 1 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan8: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan8 prefixlen 64 scopeid 0x16
inet 10.0.8.2 netmask 0xffffff00 broadcast 10.0.8.255
groups: vlan GR_LAN_Servers
vlan: 8 vlanpcp: 0 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lagg0_vlan99: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 00:40:d7:e0:09:c4
inet6 fe80::240:d7ff:fee0:9c4%lagg0_vlan99 prefixlen 64 scopeid 0x17
inet 10.0.99.2 netmask 0xffffff00 broadcast 10.0.99.255
groups: vlan GR_LAN_NoAccess
vlan: 99 vlanpcp: 0 parent interface: lagg0
media: Ethernet autoselect
status: active
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>


[EDIT]:
I noticed that I don't have any untagged VLAN on this LAG group on the switch, and accordingly I had not assigned the lagg0 interface itself to any optXX. So I thought maybe lagg0 was not "properly enabled": I added an OPT11 interface assigned to lagg0 and enabled it, but left both IPv4 and IPv6 set to "None"; I also did the same for a VLAN I had declared on lagg0 but never got to use... So I'm posting the full ifconfig -a just in case...
#35
General Discussion / Re: VLAN on Bridge
March 15, 2021, 01:14:09 PM
Thanks a lot. This will force me to create more bridges than I would have liked, but thanks for the information! Somehow I couldn't find it anywhere else.

But in fact that will allow me to bridge VLAN ID 10 on the LAGG to the vNIC interface without any VLAN on it, since my vNIC doesn't really need VLANs: all my VMs should be on the same network, just bridged to my VLAN 10 so they're on the same network as my physical servers...

Thanks again.
Best regards.
#36
General Discussion / VLAN on Bridge
March 15, 2021, 09:44:36 AM
Hello,
I will set up about 10 VLANs on my opnSense firewall, and I'm not sure whether I should declare the VLANs at the physical NIC, the LAG, or the bridge level.

To be clear, I want to use
- 3x 1 Gb/s NICs in a LAG to my switch, for a bandwidth increase between several clients, servers and VLANs
- 1x 10 Gb/s virtual NIC to my VMs

There will be one VLAN that should span those 4 interfaces.
I was thinking I should bridge the LAG and the vNIC into one big LAN and then create the VLAN on the LAN bridge.
But I could also create the VLAN on the LAG and on the vNIC, and then bridge those 2 VLAN interfaces.

I could even create the VLAN on each NIC, then LAG the VLANs, and then bridge...
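For what it's worth, the second option (VLAN on the LAG and on the vNIC, then bridge the two) would reduce, at the FreeBSD level, to something like this sketch (interface names from this thread; opnSense would normally issue these commands for you via the GUI):

```shell
# VLAN 10 as a child of each trunk interface...
ifconfig vlan0 create vlan 10 vlandev lagg0
ifconfig vlan1 create vlan 10 vlandev vtnet0
# ...then one bridge joining the two VLAN children into a single L2 segment
ifconfig bridge0 create
ifconfig bridge0 addm vlan0 addm vlan1 up
```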

Not sure if that's clear to you; I hope you can provide insight on which way is best for performance.

Since I plan to create several VLANs, I see one advantage to creating them directly on the bridge: fewer declarations, only one per VLAN instead of three objects per VLAN (one on the LAGG, one on the vNIC, plus the bridge joining them).

Thanks in advance for your kind help.
#37
Thanks for your answers!
I'm kind of out of my depth with the Virtual IP thing; I don't get why there's a password in there, for example... I guess I just have to try it out and see how it looks in the GUI once I've added a virtual IP.

But since it seems you can't easily understand what I'm trying to do, I realize that maybe I myself am not clear on what I want, and that I'm trying to use features for things they were not designed for...

I'll have to take a step back and think again.

It seems VLANs are the way to go, but with most devices not supporting them natively, I need the switch to handle this, plus separate wireless SSIDs, and very quickly the number of networks grows and the maintenance work grows with it...

In short, it all centers on applying different FW policies by "class" of device; for me those are my networking gear, servers, PCs, media/gaming, smart home devices and finally CCTV. That's already 6 VLANs, and at least 3 of them contain both wired and wireless devices, so 3 SSIDs... That's getting quite complex for me, especially since most of the time I don't mind them contacting each other...

That's why I was thinking of only one network, with all configuration in common and only a few rules applying to a big address range, for which my DHCP would assign devices a static lease in the proper range based on their MAC. So a wide-open network, but devices get an IP in a specific range/subnet according to their class...

But maybe that's not the solution... Maybe I do need VLANs, with floating rules for the rules they share; NAT rules can, I think, also apply to several interfaces... I'll still have to find a way in OpenWrt to assign a VLAN number to each SSID...

That still looks like a lot of work, and when I feel lazy I tend to want to revert to hanging everything directly off my ISP router and throwing my FW away... After all, it's just home networking. And the next day, when I'm less lazy, I think about keeping those unsafe smart home things contained... I think you've given me what I need to know; now I need to figure out what I want to do and what I'm willing to maintain over time ;)
#38
Thanks for your reply. I was indeed unsure, but I get that VLANs are the proper way to do this.

Nevertheless, it's really not a big problem that the switch handles this traffic without the router seeing it.
I'm mostly interested in policy routing, so that my servers use one VPN for internet access while my clients use another...

So I'm still quite interested in finding out how to have several subnets on the same opnSense NIC without using VLANs...

Thanks in advance for any help you can provide, and sorry to disappoint: I'm knowingly going back from segregated networks to home-grade networking with little security, a tradeoff I need to make for simpler administration ;)
#39
I'm trying to understand the consequences of getting rid of the current VLANs on my home network.
I don't really need true network isolation like VLANs offer, and it's too maintenance-heavy for me to manage VLAN IDs port by port, and device by device for those that support it, so I'm trying to simplify and see what I'll lose.

Let's say I put this in place:

  • OpnSense router with LAN NIC set to 10.0.0.1/16
  • Router is only connected to a L2 switch, all other devices are connected to the switch
  • Server has static IP 10.0.1.2/24
  • Client has static IP 10.0.2.3/24

My current understanding is that when the client contacts the server, 10.0.1.2 being outside its subnet, it will route the packet to its gateway; the opnSense router will receive the packets, apply all FW rules, and, if allowed, send the traffic along to the server.
I think the switch will not be able to forward to the server directly without the router having seen the traffic, since the client itself will have put the MAC address of its gateway on the frame... Am I correct here?
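That reasoning can be checked with plain mask arithmetic: a host only ARPs for (and is switched directly to) destinations inside its own configured subnet; anything else is framed to the gateway's MAC. A toy sketch of that decision (pure shell arithmetic; nothing here touches a real network):

```shell
#!/bin/sh
# ip_to_int: dotted quad -> 32-bit integer
ip_to_int() {
    oldIFS=$IFS; IFS=.
    set -- $1
    IFS=$oldIFS
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# next_hop SRC_IP PREFIX_LEN DST_IP:
# "direct" if dst is inside src's own subnet, else "gateway"
next_hop() {
    mask=$(( (0xFFFFFFFF << (32 - $2)) & 0xFFFFFFFF ))
    if [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$3") & mask )) ]; then
        echo direct
    else
        echo gateway
    fi
}

next_hop 10.0.2.3 24 10.0.1.2   # client configured /24: server off-subnet -> gateway
next_hop 10.0.2.3 16 10.0.1.2   # same client configured /16: server on-subnet -> direct
```

So yes: with the server on 10.0.1.2/24 and the client on 10.0.2.3/24, client-to-server traffic always crosses the router, even though the switch could physically deliver it directly.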

I'm also still looking into a way to actually put my clients into a smaller subnet via DHCP, as most of my clients use DHCP; even some servers do, though I could switch those to static IPs. I'm still unclear on how to have several subnets coexist on the same router NIC, even if only one of them has DHCP enabled. Best of all would be being able to specify each device's subnet in its static DHCP entry...

Any advice on how to deal with several subnets on a single NIC is welcome ;)

Thank you in advance.
#40
General Discussion / Re: Any news on FreeBSD 13?
March 01, 2021, 05:26:26 PM
Thank you very much! So early next year; I'll live with 1 Gb/s for a while yet...
opnSense has nothing to do with it, but it's still frustrating to see over 100 Gb/s in the datacenter while at home WiFi is overtaking copper for speed... I'll test with Linux anyway, because I fear USB NICs have a bad reputation for stability; if that's still the case, I'm still years away from home networking under $1k and faster than gigabit...

Thanks anyway and sorry for my rant;)
#41
Hello,

Does anyone know when opnSense is likely to upgrade to FreeBSD 13?

I'm told the upcoming FreeBSD 13 will bring support for my USB 2.5G NIC based on the RTL8156B, so I'm quite keen to see whether a USB NIC can indeed give me a speed boost...

I'm not looking for a commitment, just a rough estimate from someone more familiar with the opnSense dev cycle, since I'm still very new to opnSense. Like "end of 2021" or "not before 2022"...

So if anyone has any idea when I can expect the opnSense updater to offer an upgrade that moves me to FreeBSD 13 and gives me that driver, I'm all ears!

Otherwise I'll just patiently wait ;)

Thanks in advance!
#43
I'm trying to find a simple way to apply a firewall rule to a range of IPs.
Say my FW interface is set to 10.0.0.1/16, client A is 10.0.1.55/16 and client B is 10.0.2.55/16.
If I add a FW rule applying to source 10.0.1.0/24, will it match traffic from client A and not from client B? Or will it not match traffic from client A at all, since the netmask is different?
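As far as I know, a rule's source CIDR is evaluated against the packet's source address alone; the sender's own configured netmask never appears in the packet, so it plays no role in matching. A toy sketch of that evaluation (plain arithmetic; `matches` is an illustrative helper, not a firewall API):

```shell
#!/bin/sh
# ip_to_int: dotted quad -> 32-bit integer
ip_to_int() {
    oldIFS=$IFS; IFS=.
    set -- $1
    IFS=$oldIFS
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# matches NET/PLEN ADDR: does ADDR fall inside the rule's source network?
matches() {
    net=${1%/*}; plen=${1#*/}
    mask=$(( (0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF ))
    if [ $(( $(ip_to_int "$net") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]; then
        echo match
    else
        echo no-match
    fi
}

matches 10.0.1.0/24 10.0.1.55   # client A -> match
matches 10.0.1.0/24 10.0.2.55   # client B -> no-match
```

Under that logic, 10.0.1.0/24 matches client A (10.0.1.55) but not client B (10.0.2.55), regardless of both clients being configured with /16 masks.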

I'm trying to find a reasonable way to apply FW rules to a range of IPs; maybe there's an option somewhere else that I haven't found yet.
Thanks in advance for your kind help.
#44
Thanks a lot for the reply!
So I guess that with Suricata disabled this would still greatly outperform my current router, which has only 1 Gb/s NICs...
I'm not a big fan of SFP for the home; I fear it'll be years before I can take advantage of it on more devices, but I haven't found a similar device with 2.5 Gb/s or 5 Gb/s over standard home RJ45 at a similar price...

Thanks again, have a nice day.
#45
I just came across this today :
https://www.alibaba.com/product-detail/OEM-OPNsense-Pfsense-firewall-hardware-mini_62100677126.html?spm=a2700.wholesale.deiletai6.1.76201d58MNU2sm
Pricing on such devices looks realistic; here's another one:
https://www.alibaba.com/product-detail/POE-mini-pc-intel-3855U-CPU_62015422259.html?spm=a2700.wholesale.deiletai6.5.5e906d6chvJ1Gm

I'm still completely new to SFP+, and actually to anything above 1 Gb/s (home user...).
But I'm wondering if something like this could actually route that much traffic... I do have a 5 Gb/s fiber connection from my ISP; I could probably plug the SFP(+?) connector in there and find a direct-attach cable or a BaseT module to go to my NAS/server and workstation...

But it somehow seems too good to be true... Especially with a Celeron 3855U... But with an i7 7600U?

Do you think it can sustain over 5 Gb/s from my desktop to my server, or to my ISP (provided I manage to use this instead of my ISP modem, if the ISP is nice enough to hand over the PPPoE settings...)?

Or is such a device really not suited for such speeds, and does it exist only to offer an SFP connector for bridging longer distances in the industrial world, not to sustain high bandwidth?

Thanks for any insight ;)