
Messages - bachmarc

#1
In the past, on a Linux-based firewall, I had a more complex setup that shielded my job (video calls) against all the kids in the house who create traffic bursts.

In OPNsense I thought I would mirror this, but I found that the CoDel machinery can do more than just keep a fast lane for my work. So initially I just shaped the traffic to avoid bufferbloat and postponed the further ideas. It seems to work somehow, because I get an A rating in the Waveform bufferbloat test with the shaper and a B without.

I have a typical German asymmetric connection: 500 Mbit down / 50 Mbit up.
There are things that make me suspicious about the shaper, and as I cannot find real documentation, it all ends in guessing.

a) I lose a significant part of my 500/50 with the shaper switched on. It varies, but 430/42 is a good day.
That feeds the suspicion that even a dumb throttle would reduce pressure on the router and deliver similarly stable ping times.

b) The kernel throws these "over limit" messages and I cannot find any context for them.
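For whoever wants to dig: as far as I understand the standard ipfw tools, the live shaper state can be inspected read-only from the OPNsense shell:

ipfw pipe show     # bandwidth caps and per-pipe settings
ipfw sched show    # the FQ-CoDel schedulers and their flow queues
ipfw queue show    # queue backlogs and drops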


My setup is fairly simple:
2 Pipes
- 500 Mbit for downstream
  method: FlowQueue-CoDel, (FQ-)CoDel ECN enabled, quantum 1600, limit 1600
- 50 Mbit for upstream
  method: FlowQueue-CoDel, (FQ-)CoDel ECN enabled, quantum 160, limit 160

3 Queues, all (FQ-)CoDel ECN enabled:
- Down, prio 1
- Up, prio 10
- Up, prio 1

4 Rules:
- ACK and DNS I push through the prio queue
- all the rest stays in the default queues
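If I understand ipfw(8) right, all that clicking should boil down to something like this underneath (just a sketch from the man page, not copied from what OPNsense really generates; I also map the UI "prio" onto the weight parameter here, which is an assumption):

ipfw pipe 1 config bw 500Mbit/s        # downstream cap
ipfw sched 1 config pipe 1 type fq_codel ecn quantum 1600 limit 1600
ipfw pipe 2 config bw 50Mbit/s         # upstream cap
ipfw sched 2 config pipe 2 type fq_codel ecn quantum 160 limit 160
ipfw queue 1 config sched 1 weight 1   # down, prio 1
ipfw queue 2 config sched 2 weight 10  # up, prio 10 (ACK, DNS)
ipfw queue 3 config sched 2 weight 1   # up, prio 1 (everything else)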

I followed a basic tutorial, and there are things that confuse me but never get explained, like why nobody enables "CoDel active queue management". Either it is less obvious than it looks, or the UI has a terrible naming convention.

If you could shed some light on this, I would be happy.

Marc

#2
It seems nobody really knows how to configure it right... do we all just replicate the same steps like apes?

:(
#3
The dirty things you need to do to encrypted traffic to get this ability are exactly what I would never do, because I am afraid I would make things worse if I mess up the SSL chain while trying to be the cleverer man in the middle...

I thought about all that, but I think I would open hell's gate, and moreover I would violate essential rights of the people in the network I am trying to protect.

A thin bridge to walk in the EU, even for people who know the how-to better than poor me.

Marc
#4
Quote from: ddutch206 on December 09, 2022, 04:17:47 PM
Honestly the reason was laziness. I knew I didn't want to create a bridge network, but did want the internal interfaces to have the ability to access each other. Coming into the WAN I only have 4 ports defined, everything else is closed down.

If you apply a default "allow every protocol from every source to every target" rule to each interface, then everything can talk to everything else with ease... no bridge needed; this is handled internally.

The only argument against it: if you allow every subnet to reach everything else anyway, I see no point in having subnets at all. Technically a bridge and one subnet would provide the same thing with a few lines of systemd.networkd or the bridge command.
It would just cost you the fun in OPNsense ;)
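From memory, it would be roughly this in systemd.networkd (untested sketch; the interface names and the address are invented):

# /etc/systemd/network/br0.netdev -- create the bridge
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/lan.network -- enslave the wired ports
[Match]
Name=enp*

[Network]
Bridge=br0

# /etc/systemd/network/br0.network -- one flat subnet on the bridge
[Match]
Name=br0

[Network]
Address=192.168.1.1/24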

Marc
#5
I am really not an expert on OPNsense and have to admit I never touched floating rules at all...
There may be others who can point to the root cause, but I would simply define rules per interface.

I created rules per interface and this does exactly the job: I can adjust who can reach which subnet from which side.

It may all be simpler, faster, cooler with floating rules, BUT I never read the documentation behind them, and I found it not too hard to get things running with interface rules...
I thought floating rules are there to logically group interfaces and apply similar filters across those groups. That does not sound like what you want, and it is obviously not going smoothly in your case ;)

Is there a special reason for those uncommon subnet masks? I like them to match the decimal IP numbers and the dots...
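(What I mean: a /24 is 255.255.255.0, so 192.168.111.0/24 reads straight off the dots, while e.g. a /27 is 255.255.255.224 and cuts inside the last octet: 192.168.111.0/27 gives you hosts .1 to .30, and you have to calculate.)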
Marc 
#6
Hello,
wherever I search I only find "click by click" shaping documentation that shows a monkey how to do it but never explains the why and the "what if".

the kernel reports via dmesg:
fq_codel_enqueue maxidx = 801
fq_codel_enqueue over limit

OK, and here the clicky docs end :(
I cannot find any meaningful documentation between the levels of "the algorithm: scientific papers" and "click here and do not ask, monkey"...

Crazy things like "do not check: Enable CoDel active queue management" are totally unclear to me.
Googling my warning finds nothing but guesswork.
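The little I pieced together myself from reading the FreeBSD source (dn_sched_fq_codel.c), so please correct me: "over limit" seems to be printed when the total packets queued across all flows of a scheduler exceed its configured limit; the scheduler then drops from the flow with the biggest backlog, and maxidx looks like the index of that flow. If that reading is right, it is my own limit of 1600/160 packets that is being hit. The knobs should be visible with:

sysctl net.inet.ip.dummynet.fqcodel   # global fq_codel defaults
ipfw sched show                       # the limit each scheduler actually runs with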

I beg for a link to some good documentation.

Marc   

#7
In the old days (kids smaller, no grandparents in the house) it was my server... now it is THE server and I am just Mr. Responsible, aka Admin => suddenly I have to announce changes and maintenance windows ;)

=> I will have to wait a while to avoid a riot in the house. Maybe the situation gets clearer around 22.7.9.

#8
If only I used it... I am sorry: I do not :-[

STOP: it seems Suricata is installed... I simply was not aware that my IPS is Suricata...
I read several other posts about it, but none sounds like what I am experiencing.
#9
Hello,
I have a virtualized OPNsense with a few subnets that ran cleanly so far... now I have successfully upgraded to
OPNsense 22.7.9-amd64
FreeBSD 13.1-RELEASE-p5
... well, almost successfully.

My interface vtnet1 is bound to the subnet 192.168.111.0/24 => my wired LAN in the house.
But after a while the clients lose their connection to the network and switch over to the WLAN, *.*.112.0/24.

All this happens quite quietly... the DHCP clients lose their leases; if I assign a static IP I still cannot ping the server in the basement. I cannot reach the server from the 112 subnet either.

But the gateway on *.*.111.1 is reachable and also the interface vtnet1 is active according to ifconfig.
The WebUI looks completely normal...all services are running.

OK, the dhcpd log never gets to a DHCPACK :( but I see requests coming in... and offers going out.
Ping to the server does not work.

In the end I cannot achieve anything via the WebUI... I tried:
- restarting services
- de/activating the interface
- setting the firewall filter rules to pass everything

Restarting OPNsense, or restarting the services with option 11 over SSH, brings my *.*.111.0 network back up. Until it silently dies again after a while...

I come from Linux, and apparently BSD is quite different... I cannot find a hint in any log about what dies and why.

I asked in the German forum where to find more technical hints in BSD; unfortunately I did not get any.
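What I plan to try next time it dies, with the standard FreeBSD tools (hoping I use them right, coming from Linux):

ifconfig vtnet1                  # link state and addresses
tcpdump -ni vtnet1 arp or icmp   # do requests and replies still reach the wire?
arp -an                          # does the firewall still learn the 111 MACs?
netstat -rn                      # routing table still sane?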

Now it ran for one day and then the WLAN went away.

The kernel of the host on which the OPNsense guest is running suddenly throws:
brsolnetwlan: received packet on enp8s0f1 with own address as source address (addr:d2:57:d1:5c:59:4f, vlan:1)

Shortly after that the LAN was gone too...
I went to the server in the basement and restarted the OPNsense services with option 11: Tada! Works again, without a reboot, without changes to the hypervisor host, without touching the cabling. The host kernel reports no more errors.
Unfortunately probably only until tomorrow...
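If it happens again I also want to look at the bridge on the Linux host. As far as I understand that kernel message, the bridge saw a frame with its own MAC coming back in, which smells like a loop or a confused guest port. With iproute2 (a sketch; the bridge name is taken from the message):

bridge fdb show br brsolnetwlan | grep d2:57:d1:5c:59:4f   # where is that MAC learned?
ip -br link show master brsolnetwlan                       # which ports hang off the bridge?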

I was tired of it by now and reset the VM to the state before the 22.7.9 upgrade. There it ran super stable.
It is of course extremely annoying that you can no longer run an upgrade without having to listen to the grass grow afterwards, because the system subtly fails somewhere.
I would have liked to file a bug report, but "something is wrong" is of little use, and I do not know BSD well enough.

Nevertheless I wanted to let you know that something is doing crazy things inside 22.7.9.

Regards Marc
#10
So,
now it ran for one day and then the WLAN went away.

The kernel of the host on which the OPNsense guest runs suddenly throws:
brsolnetwlan: received packet on enp8s0f1 with own address as source address (addr:d2:57:d1:5c:59:4f, vlan:1)

Shortly after that the LAN was gone too...
I then went down to the server in the basement and restarted the OPNsense services on site with option 11: Tada! Works again... until tomorrow.

I was tired of it by now and reset the VM to the state before the 22.7.9 upgrade. There it ran super stable.

It is of course extremely annoying that you can no longer run an upgrade without having to listen to the grass grow afterwards, because the system subtly fails somewhere.

I would have liked to file a bug report, but "something is wrong" is of little use, and I do not know BSD well enough.

Regards, Marc

#11
Hello,
I have a virtualized OPNsense with a few subnets that ran cleanly so far... now I have successfully upgraded to
OPNsense 22.7.9-amd64
FreeBSD 13.1-RELEASE-p5
... well, almost successfully.

My interface vtnet1 is bound to the subnet 192.168.111.0/24 => my wired LAN in the house.
After a while, though, the clients lose their connection to the network and switch over to the WLAN, *.*.112.0/24.

All this happens quite quietly... the DHCP clients lose their leases; if I assign a static IP I still cannot ping the server in the basement. I cannot reach the server from the 112 subnet either.

The gateway on *.*.111.1 is reachable though, and the interface vtnet1 is active according to ifconfig.
The WebUI looks completely normal... all services are running.

OK, the dhcpd log never gets to a DHCPACK :( but I see requests coming in... and offers going out.
Ping to the server does not work.

In the end I cannot achieve anything via the WebUI... I tried:
- restarting services
- de/activating the interface
- setting the firewall filter rules to pass everything

A restart of OPNsense, or a restart of the services with option 11 over SSH, brings my *.*.111.0 network back up. Until it silently dies again after a while...

I come from Linux, and apparently BSD is quite different after all... I cannot find a single hint in any log about what dies there and why.

Any ideas what could cause problems here and where I could see it? The problem is unfortunately extremely diffuse; you cannot search Google with that.

Regards, Marc