Messages - EHRETic

#16
Hi there,

I recently made a design change to allow patching my network switches without interruption.
(https://forum.opnsense.org/index.php?topic=32211.msg155680#msg155680)

My physical firewall has 2 NICs configured in failover mode in a LAGG, spread across 2 physical switches, and so is my Internet router (yes, double NAT is not ideal, but I have no choice with my provider).
All the interface work is done via VLANs and separate interfaces.

RSTP is activated on the switches, so the 2nd link of the router is disabled if switch number 1 is online.

If I power off or update switch 1, Internet and everything else continues to work as expected, except my IPsec tunnel to another failover site. When the switch comes back online, the tunnel doesn't reconnect.

I've tried restarting the IPsec service, but nothing works unless I restart the firewall. Restarting the firewall or the service on the remote site doesn't help either.
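
For the record, this is what I plan to check from the console next time it happens; only a rough sketch, assuming strongSwan's swanctl is usable on the box and that the tunnel's IKE/child entries are named "site-b" (a hypothetical name):

# check whether the IKE and child SAs are still established after the switch reboot
swanctl --list-sas
# tear down the stale IKE SA, then ask strongSwan to re-establish the child SA
swanctl --terminate --ike site-b
swanctl --initiate --child site-b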

Any idea what could be the issue and how to solve this?

Thanks in advance for your help ;)
#17
General Discussion / Re: LAGG redesign question
January 31, 2023, 01:46:45 PM
@pmhausen thanks a lot, I think this answers my questions! :)
#18
General Discussion / Re: LAGG redesign question
January 31, 2023, 01:06:14 PM
Quote from: pmhausen on January 31, 2023, 11:52:38 AM
Failover is the only setting that might work.

Would you know what "If the master port becomes unavailable, the next active port is used." actually means in practice?

It really depends on how unavailability is defined. The physical link might well stay up during a switch reboot while traffic is interrupted anyway ;D

But I guess I'll have to try, no?
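
If it helps, once the failover LAGG is in place, this is roughly how I would check which member is active from the console; a sketch only, assuming the LAGG is lagg0 with igb0/igb1 as members (hypothetical names):

# the laggport lines show which member currently carries the ACTIVE flag
ifconfig lagg0
# per-member link status, to see whether the port is really reported as down
ifconfig igb0
ifconfig igb1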
#19
General Discussion / LAGG redesign question
January 31, 2023, 11:31:14 AM
Hi there,

I've a question concerning my firewall NIC/LAGG design.

Up to now, I had a single switch (Ubiquiti) and 2 physical NICs configured in LACP on my OPNsense firewall. All interfaces were managed by different VLANs (including WAN connectivity).

But to ease the whole firmware patch management and offer redundancy on several systems, I bought a second switch.

Now, as Ubiquiti doesn't offer LACP across several physical switches, I'm wondering which LAGG type I should configure to get redundancy and a bit of load balancing between both links: would you choose failover, loadbalance or roundrobin?

My first reflex would be to go with loadbalance, but maybe there are a few things to consider first. Maybe a LAGG is not the best option at all.
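
For context, the underlying FreeBSD failover LAGG would look roughly like this; only a sketch with hypothetical member names igb0/igb1, and in OPNsense this is of course configured through the Interfaces GUI rather than at the console:

# create the lagg and attach both members; the first laggport added becomes the master
ifconfig lagg0 create
ifconfig lagg0 up laggproto failover laggport igb0 laggport igb1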

Thanks in advance for your advice! ;)

PS: If required/better, I could add 2 more physical NICs to the server (but given the load, it is not necessary at all)
#20
Quote from: pmhausen on November 09, 2021, 10:57:45 PM
I have started working on it ...

Lovely to hear that too; I just started implementing Wazuh in my lab and was figuring out why one of my FWs was not able to send logs back to the server ;D
#21
Quote from: opnfwb on April 28, 2021, 10:11:50 PM
At the console check that both of these return '1':
sysctl hw.pci.enable_msi
sysctl hw.pci.enable_msix


Are you getting a faster internet connection in addition to the 10gbit NICs? Otherwise I'm not sure there would be much realized benefit here unless the ISP over-provisions beyond 1gbit?

I get 1 for both commands, so nothing more to gain on that level. I have no plan to update the CPU, but I'll have a look.

For the 10 Gb/s NICs: as I have several zones, and one in particular is the multimedia zone that communicates with my Plex server in another zone, it might be good to have something "strong". Even if I go 4K in the future, that will not saturate 1 Gb/s, but the 10 Gb/s card is also an Intel NIC, which should be able to offload some things.
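
As a side note, this is roughly how I would check what the Intel card can actually offload from the console; a sketch only, and ix0 is just a hypothetical name for the 10 Gb/s interface:

# list everything the driver supports (TSO, LRO, checksum offload, ...)
ifconfig -m ix0
# show which offloads are currently enabled on the interface
ifconfig ix0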

But I need to purchase 2 extra Twinax cables first! ::)

For the future, I also plan to put a proxy in place so the kids can have filtered Internet, but I'll try that in a VM lab first to see how it works and if I can integrate it with AD auth.
#22
Quote from: ggho on April 21, 2021, 08:09:07 AM
Hello,

What is the problem with using processors from 2013-2014?
An i5 from that period would be more than enough in your case.

A more recent processor, therefore more powerful, and fanless at the same time: that gets complicated.

I somewhat agree with him: people often underestimate the hardware needed to take full advantage of their line.

Personally, I had to migrate from a VM running on a Core i7-6800K with 2 cores to a dedicated physical server (Core i5-4570) to reach 500 Mb/s on a 1 Gb/s line; with the latency induced by virtualization (CPU and disk mostly), it just wouldn't take off.

And I didn't believe it until I noticed that my VM was faster on the fastest processor (I have several hosts of different generations).

But then, it also depends on your needs regarding the firewall (speed, features).
What is your project? ;)
#23
Quote from: opnfwb on April 15, 2021, 03:54:44 PM
If you're migrating from a machine with different interface IDs or drivers, you can edit the backup config .xml file and replace the old interface IDs with the new IDs on your destination firewall.

opnfwb, I really have to thank you a lot, my migration was flawless and super easy! It was done in literally 5 minutes (host was prestaged and updated) :)

It was so easy that I was quite suspicious about it!!! ;D

I can now get 900Mb/s+ of Internet in both directions and approx 850Mb/s with IPS/IDS activated.
I'll retry when I switch interfaces to the 10 Gb/s cards to see if IDS/IPS performs better.

One other question: how can I make the most of my system? Memory is barely used.

#24
Thanks for that information, this would be very useful for migration! :)

I've a few small extra questions:
- Do you know if the driver has any impact?
- So if I understood correctly, I just need to replace the string vmx0 with bge0 and vmx1 with bge1 in the config file and it's done? Would be awesome (roughly as sketched below).
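
Just to make sure I got it right, something like this on a copy of the backup? Only a sketch, with hypothetical file names:

# swap the device names everywhere in the exported config before restoring it
sed -e 's/vmx0/bge0/g' -e 's/vmx1/bge1/g' config-backup.xml > config-migrated.xml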

Thanks again!
#25
Hi there,

After having tried to get more info on creating an HA setup between a VM and a physical server, I abandoned the HA idea as it doesn't seem to be feasible. The only possibility would be to create a LAG to have the same interfaces on both sides, but that is apparently not really possible in a VMware VM (more info here: https://forum.opnsense.org/index.php?topic=21696.msg102192)

So I have an unused i5-4570 with 32 GB of RAM (a previous ESX host in my homelab); storage is a USB stick (32 GB).

2 main questions:
- What is the best way to migrate from one to the other?
Knowing that I have 2 interfaces in the VM (a trunk with several VLANs and the WAN) and, on the physical server, a 4x1 Gb/s Broadcom card, how does interface assignment work after the configuration upload?

- What kind of optimizations/tweaks can I do to make the most of my hardware (esp. memory, which is overkill)?
You can suggest hardware modifications too...

PS: In the near future, I'll have a dual-port Intel 10 Gb/s card for this machine. Is it recommended to create a LACP LAG with "VLAN interfaces" for everything, or to dedicate one port to WAN and one to the LANs?
(I have a 1 Gb/s Internet connection; rough sketch of the first option below.)
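
To illustrate the first option, this is roughly what it would look like underneath; a sketch only, with hypothetical interface names ix0/ix1, and in OPNsense the LAGG and VLANs would of course be created through the GUI rather than at the console:

# LACP lagg over both 10 Gb/s ports, then one VLAN interface per zone (WAN included)
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport ix0 laggport ix1
ifconfig vlan10 create vlan 10 vlandev lagg0
ifconfig vlan20 create vlan 20 vlandev lagg0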

Thanks for your suggestions! ;)
#26
Hi there,

I made a few tests, and the host can really make a difference: my newest one has a CPU frequency of 3.4 GHz (i7-6800), and on a VM that has a direct connection to the Internet VLAN, I can reach speeds above 900 Mb/s.

The same VM going through my firewall (both on the same host) gets the speed cut in half.
I've also tried adding a dedicated interface to my FW, without VLAN tagging: same (bad) result.

I'm out of ideas now. Any clue (please also check my previous post with the test VMs)? Thx ;)
#27
Hi,

On my side, everything was solved when I changed my hosts' NICs to 10 Gb/s; it never crashed again.
Settings-wise, I didn't change a thing, but OPNsense has also been updated a few times since then.

Good that you could fix it! ;)
#28
Anyone? ;)
#29
Hi there,

So I made some tests. I've created some VMs, so we have (same host, same client machine):
- FW production: 2 VMXNET3 NICs, multiple VLAN interfaces, IPsec tunnel and OpenVPN for some VPN clients, 2 vCPUs, 4 GB RAM
- FW test 1: 2 VMXNET3 NICs, a single interface (no VLAN tagging), no VPN, 1 vCPU, 2 GB RAM
- FW test 2: 2 E1000 NICs, a single interface (no VLAN tagging), no VPN, 1 vCPU, 2 GB RAM

More or less, this is what I get (on average) from my test client on DSLreports:
- Prod FW: down 500 Mb/s / up 450 Mb/s
- Test FW 1: down 700 Mb/s / up 650 Mb/s
- Test FW 2: down 780 Mb/s / up 850 Mb/s

So, the difference between VMXNET3 and E1000 is quite small on these test VMs, but if I compare both extremes, there is a huge speed difference...
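
Before going further, I could also take the Internet line out of the equation and measure the firewall alone with iperf3; a sketch only, where 192.168.10.10 is a hypothetical address of a server VM in another zone:

# on a server VM in one zone
iperf3 -s
# on the test client in another zone, so the traffic is routed through the firewall
iperf3 -c 192.168.10.10 -t 30 -P 4      # 30 seconds, 4 parallel streams
iperf3 -c 192.168.10.10 -t 30 -P 4 -R   # same test, reverse direction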

Any help/advice before I try a physical machine? Thanks in advance ;)
#30
21.1 Legacy Series / Re: High availability project
February 23, 2021, 07:03:40 PM
Quote from: Voodoo on February 23, 2021, 01:27:08 PM
Make sure to disable hardware offloading which can create issues.

# Interfaces -> Settings: Check "Disable CRC, TSO and LRO hardware offload" and "Disable VLAN Hardware Filtering"

Yeah, that is the thing, I've already disabled all of this. Also, IPS is completely off for the moment.

The VM runs on a PowerEdge R730 with dual E5-2620 CPUs and 128 GB of RAM (<50% used); storage is a Synology on SSDs (iSCSI) and the network is a Ubiquiti 10 Gb/s switch.
The VM is configured with 4 vCPUs and 4 GB of RAM.

The current VM config seemed like a "sweet spot", as more would not give more performance (you end up in "CPU wait"), but less was definitively worse! :-)
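
For reference, this is the kind of console check that shows whether the vCPUs are the bottleneck during a speed test; just a sketch with standard FreeBSD tools:

# per-CPU usage from inside the firewall while the benchmark runs
top -P
# overall system statistics, one line per second
vmstat 1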

The benchmark was made on the same physical computer as the direct test, otherwise it doesn't make sense.

And of course, I'm not against trying to tweak the VM... but given your statement, I'm wondering where the issue is now! ;D

Edit: I re-tested the connection speed across the switches I'm using for virtualization, so the same computer, not directly attached to the box like in the first test, but to its switch. I consistently got 800 Mb/s symmetrical or a little more.

So I'm really losing half of it through the FW. ???