Hello,
I am running the latest OPNsense 23.7.5 on a PC Engines APU6D4 board. Everything was working fine until I started adding more VLANs.
I ran a speed test from LAN -> WAN -> speedtest.net and got my full ISP bandwidth with no issue, but once I started finishing up my setup and testing, throughput dropped by more than half.
Here is the setup in a nutshell:
* WAN interface connected to the modem "igb0"
* LAN interface connected to the laptop for setup "igb1"
* vlan0.1.[20-110] on "igb2"
* vlan0.2.[20-110] on "igb3"
* Assigning the newly created VLANs as interfaces
* bridge[0-9] bridging two VLANs together; for example, bridge0 has vlan0.1.20 and vlan0.2.20
* DHCP set up on those bridge interfaces
* bridge interfaces configured with static IP
* All interfaces are enabled
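For reference, this is roughly what that setup looks like at the FreeBSD shell for a single VLAN (VLAN 20 as an example; the interface and address names here are hypothetical, and OPNsense normally creates all of this through the GUI):
# tagged VLAN 20 child on each physical port
ifconfig vlan120 create vlan 20 vlandev igb2
ifconfig vlan220 create vlan 20 vlandev igb3
# bridge the two VLAN children; the L3 address lives on the bridge
ifconfig bridge0 create
ifconfig bridge0 addm vlan120 addm vlan220 up
ifconfig bridge0 inet 192.168.20.1/24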
Then I started running some tests: I began disabling the assigned VLAN interfaces, and once I did that, I got back to the actual bandwidth. The bridges are still up, but the "members" of the bridge from the assign tab aren't enabled; however, when I SSH in, I can see the VLAN interfaces up.
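(From the shell, bridge membership and link state can be cross-checked with standard ifconfig calls; the names below are hypothetical:)
ifconfig bridge0   # lists the member interfaces and their state
ifconfig vlan120   # a VLAN child; look for the UP and RUNNING flags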
So I have a couple of questions:
- Is this the best way to have two physical interfaces sharing VLAN data? Those interfaces will be connected to different access points that will tag the same VLANs, so I can't actually use LAGG/LACP since no switch in the design supports LACP.
- Is it okay to disable those interfaces? I know that the bridge page on OPNsense now shows nothing selected, so it might be a bug?
Thanks :)
Attached are some photos to try to explain what I mean.
Also, I forgot to mention: the CPU seems to be doing fine either way; it might hit 80% when uploading traffic.
Here are the rest of the screenshots.
Just to be clear: once the members are added to the bridge, performance goes down.
Are you always testing with just two devices? And only adding more VLANs to the bridge? Or are you adding more devices as well?
It looks like you're trying to have OPNsense route inter-VLAN traffic as well as cross-VLAN traffic. Is there a specific use case for this? Why not use switches?
Quote from: CJ on October 10, 2023, 04:28:37 PM
Are you always testing with just two devices? And only adding more VLANs to the bridge? Or are you adding more devices as well?
It looks like you're trying to have OPNsense route inter-VLAN traffic as well as cross-VLAN traffic. Is there a specific use case for this? Why not use switches?
Just two devices for now: one connected directly to the LAN (no-VLAN) port and the other connected to a VLAN port. I am just adding more VLANs, no more devices yet; I am still trying to figure out the best setup.
So for now I just want to go from a device on a VLAN out to the internet; later I will have firewall rules to allow some VLANs to access other VLANs. Also, I am getting much better performance if I join igb2 and igb3 into a LAGG with LB instead of a bridge, but I am still testing that and still not sure what the best approach would be.
Quote from: sherif on October 10, 2023, 04:34:20 PM
Quote from: CJ on October 10, 2023, 04:28:37 PM
Are you always testing with just two devices? And only adding more VLANs to the bridge? Or are you adding more devices as well?
It looks like you're trying to have OPNsense route inter-VLAN traffic as well as cross-VLAN traffic. Is there a specific use case for this? Why not use switches?
Just two devices for now: one connected directly to the LAN (no-VLAN) port and the other connected to a VLAN port. I am just adding more VLANs, no more devices yet; I am still trying to figure out the best setup.
So for now I just want to go from a device on a VLAN out to the internet; later I will have firewall rules to allow some VLANs to access other VLANs. Also, I am getting much better performance if I join igb2 and igb3 into a LAGG with LB instead of a bridge, but I am still testing that and still not sure what the best approach would be.
Can you post a diagram? You didn't really answer my questions.
Based on my understanding of what you've set up, I'm not surprised that a LAGG performs better than a bridge, as you're offloading the inter-VLAN routing to an actual switch instead of trying to force OPNsense to do it.
Quote from: CJ on October 10, 2023, 04:46:09 PM
Quote from: sherif on October 10, 2023, 04:34:20 PM
Quote from: CJ on October 10, 2023, 04:28:37 PM
Are you always testing with just two devices? And only adding more VLANs to the bridge? Or are you adding more devices as well?
It looks like you're trying to have OPNsense route inter-VLAN traffic as well as cross-VLAN traffic. Is there a specific use case for this? Why not use switches?
Just two devices for now: one connected directly to the LAN (no-VLAN) port and the other connected to a VLAN port. I am just adding more VLANs, no more devices yet; I am still trying to figure out the best setup.
So for now I just want to go from a device on a VLAN out to the internet; later I will have firewall rules to allow some VLANs to access other VLANs. Also, I am getting much better performance if I join igb2 and igb3 into a LAGG with LB instead of a bridge, but I am still testing that and still not sure what the best approach would be.
Can you post a diagram? You didn't really answer my questions.
Based on my understanding of what you've set up, I'm not surprised that a LAGG performs better than a bridge, as you're offloading the inter-VLAN routing to an actual switch instead of trying to force OPNsense to do it.
The switch doesn't do LAGG; there is only a single cable to the switch :) I didn't use LACP, but here is a diagram.
I did try two tests for VLANs. One was on a LAGG LB device (the switch connected to only 1 port; as far as I know, LB doesn't require negotiation, unlike LACP). The 2nd test was VLAN on a bridge:
igb2 --> vlan0.1.40 --> bridge0
igb3 --> vlan0.2.40 --> bridge0
Also a single cable connected.
The LAGG performed much better than the bridges, but direct LAN with no VLAN performed best.
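(For comparison, a minimal sketch of that LB lagg variant with hypothetical names; laggproto loadbalance needs no negotiation from the switch side:)
ifconfig lagg0 create
ifconfig lagg0 laggproto loadbalance laggport igb2 laggport igb3 up
ifconfig vlan40 create vlan 40 vlandev lagg0   # the VLAN rides on the lagg, no bridge needed
ifconfig vlan40 inet 192.168.40.1/24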
Quote from: sherif on October 10, 2023, 05:02:21 PM
The switch doesn't do LAGG; there is only a single cable to the switch :) I didn't use LACP, but here is a diagram.
I did try two tests for VLANs. One was on a LAGG LB device (the switch connected to only 1 port; as far as I know, LB doesn't require negotiation, unlike LACP). The 2nd test was VLAN on a bridge:
igb2 --> vlan0.1.40 --> bridge0
igb3 --> vlan0.2.40 --> bridge0
Also a single cable connected.
The LAGG performed much better than the bridges, but direct LAN with no VLAN performed best.
I'm still confused about your setup and what exactly you're testing and trying to accomplish. Routing traffic through OPNSense will always be slower than just having it done in a switch. Depending on your hardware and setup, you may or may not notice this difference initially, but it will become more obvious as the amount of traffic and interfaces increases.
You mention that the switch is only connected with one cable, but then you talk about multiple physical interfaces. Can you update the diagram to show the exact cabling and VLAN setup? Once that's done, please provide a step-by-step description of how you're testing.
Currently I have no real idea what your setup looks like or how you tested it.
The diagram is up to date and is what is being used for testing. However, forget what I am trying to do; you tell me how you would design the following test case:
- You have 1 router with 3 Ethernet ports running OPNsense. One port is WAN (could be PPPoE, or WAN to an ISP modem); the other two ports are assigned to your LAN. One access point (which will be tagging 10 VLANs) will be connected to one port, and a 2nd AP with the same 10 VLANs will be connected to the 2nd port.
- How do you configure those LAN ports / VLANs on OPNsense?
A simple diagram is attached for this test scenario.
10 VLANs on each LAN port, 10 bridge interfaces ...
Or buy a cheap 5- or 8-port switch like "anything from Ubiquiti". If you pick a model with PoE you can supply power to your APs on the go.
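(A rough idea of what the first option means at the shell, assuming hypothetical VLAN IDs 20-110 in steps of 10; OPNsense builds the same thing through the GUI:)
for id in 20 30 40 50 60 70 80 90 100 110; do
    ifconfig "vlan2${id}" create vlan "${id}" vlandev igb2
    ifconfig "vlan3${id}" create vlan "${id}" vlandev igb3
    b=$(ifconfig bridge create)                      # prints the new bridge's name
    ifconfig "$b" addm "vlan2${id}" addm "vlan3${id}" up
done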
Quote from: Patrick M. Hausen on October 12, 2023, 03:09:19 PM
10 VLANs on each LAN port, 10 bridge interfaces ...
Or buy a cheap 5- or 8-port switch like "anything from Ubiquiti". If you pick a model with PoE you can supply power to your APs on the go.
That's exactly what I did (for both suggestions). I went down the route of creating 10 VLANs on each interface and then 10 bridges, each bridge having the same VLAN from both interfaces; the performance was so bad, a 50% loss of throughput/bandwidth!
Then I went with a LAGG in LB mode (no switch with LAGG support); that performed better than the bridge setup, but still a 30% loss of throughput.
Then a single interface with a switch, as you mentioned; still almost a 20% loss...
I ended up re-flashing the APU with OpenWrt last night after a few days of trying to optimise the setup, but I do need the OPNsense firewall, so I might add that as an extra layer just to do firewalling and nothing else.
I do not experience loss of throughput when I use a single trunk interface to a switch or an LACP lagg to a pair of switches with FreeBSD and VLANs. Something else must be misconfigured in your setup. I grant that the bridge approach might become a performance bottleneck if you create 10 or more bridges; with a single one, there is also no noticeable degradation.
All with 1 Gbit/s infrastructure. 10 Gbit/s might indeed bring FreeBSD to its limits.
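(For reference, the LACP variant is just a different lagg protocol; hypothetical names, and the switch ports have to be configured for LACP as well:)
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb2 laggport igb3 up
ifconfig vlan20 create vlan 20 vlandev lagg0   # tagged VLANs ride on the lagg
ifconfig vlan20 inet 192.168.20.1/24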
Quote from: Patrick M. Hausen on October 12, 2023, 03:38:27 PM
I do not experience loss of throughput when I use a single trunk interface to a switch or an LACP lagg to a pair of switches with FreeBSD and VLANs. Something else must be misconfigured in your setup. I grant that the bridge approach might become a performance bottleneck if you create 10 or more bridges; with a single one, there is also no noticeable degradation.
All with 1 Gbit/s infrastructure. 10 Gbit/s might indeed bring FreeBSD to its limits.
The network is still well below 10 Gbit/s. I will have to source an LACP-capable switch and try again. Thanks for the support!
I have this small desktop switch with Gbit throughput and interfaces, and I just set up a new OPNsense installation - virtualised in bhyve, but with the network interfaces passed through.
What I see in iperf3 on my MacBook Pro to/from that OPNsense:
1. No VLAN, no bridge, OPNsense and Mac on same switch:
root@OPNsense:~ # iperf3 -c 192.168.1.214 -P4
[...]
[SUM] 0.00-10.00 sec 1.10 GBytes 947 Mbits/sec 147 sender
[SUM] 0.00-10.07 sec 1.10 GBytes 941 Mbits/sec receiver
2. Tagged VLAN on OPNsense, untagged on Mac, both on same switch:
iperf3 -c 192.168.1.214 -P4
[...]
[SUM] 0.00-10.00 sec 1.10 GBytes 945 Mbits/sec 0 sender
[SUM] 0.00-10.02 sec 1.10 GBytes 939 Mbits/sec receiver
3. Bridge on tagged VLAN on OPNsense, untagged on Mac, both on same switch:
iperf3 -c 192.168.1.214 -P4
[...]
[SUM] 0.00-10.00 sec 1.10 GBytes 945 Mbits/sec 0 sender
[SUM] 0.00-10.03 sec 1.10 GBytes 939 Mbits/sec receiver
Kind regards,
Patrick
Quote from: Patrick M. Hausen on October 12, 2023, 06:05:53 PM
I have this small desktop switch with Gbit throughput and interfaces, and I just set up a new OPNsense installation - virtualised in bhyve, but with the network interfaces passed through.
What I see in iperf3 on my MacBook Pro to/from that OPNsense:
1. No VLAN, no bridge, OPNsense and Mac on same switch:
root@OPNsense:~ # iperf3 -c 192.168.1.214 -P4
[...]
[SUM] 0.00-10.00 sec 1.10 GBytes 947 Mbits/sec 147 sender
[SUM] 0.00-10.07 sec 1.10 GBytes 941 Mbits/sec receiver
2. Tagged VLAN on OPNsense, untagged on Mac, both on same switch:
iperf3 -c 192.168.1.214 -P4
[...]
[SUM] 0.00-10.00 sec 1.10 GBytes 945 Mbits/sec 0 sender
[SUM] 0.00-10.02 sec 1.10 GBytes 939 Mbits/sec receiver
3. Bridge on tagged VLAN on OPNsense, untagged on Mac, both on same switch:
iperf3 -c 192.168.1.214 -P4
[...]
[SUM] 0.00-10.00 sec 1.10 GBytes 945 Mbits/sec 0 sender
[SUM] 0.00-10.03 sec 1.10 GBytes 939 Mbits/sec receiver
Kind regards,
Patrick
Wish I could get these results; I will try again! Also, once I changed the WAN to PPPoE, things went really bad, but that was also on OpenWrt; it might be MTU settings or something.
Thanks again
While I haven't run iperf, I am able to pull 900 Mbps through my UniFi 6 AP while sitting next to it. Obviously this drops off as I move away. I'm using a VLAN for the SSID and untagged for the AP itself. It's on my list to move the AP to a VLAN as well, but I haven't gotten that far yet.
Keep in mind that as you add more SSIDs on a single AP, performance will slow down. I know this is the case with UniFi products, and I have to assume others will have a similar experience.
Is there a reason you're running so many VLANs and, I presume, SSIDs through so few APs? If you're just trying to prevent clients from talking to each other, you can turn on Client Isolation, which would prevent traffic from being shared. Additionally, by using a managed solution you can have all of your APs hooked to a switch and spread the SSIDs as needed with VLANs.
Regardless, you need a switch, preferably something that supports link aggregation and VLANs. I would say to look into something with 2.5G ports so that you can avoid bottlenecking your clients, but I don't know how many clients you're planning on having per AP, and not every company has 2.5G APs yet.
Might be a little off-topic, but maybe it's interesting.
While deploying Lancom access points, I also came across the CAPWAP protocol. It's an IP-in-IP tunnel, so the access points just sit in a management VLAN and share all information through that CAPWAP tunnel with the access point controller. The controller then manages the breakout with a trunk port to all connected VLANs.
I think Sophos uses that protocol for their access points too. And Cisco as well.
That way you can essentially just have a firewall, connect a trunk to a switch, then a trunk to the access point controller, and then put your APs anywhere you want without caring about VLANs.