Messages - glubarnt

#1
Hi,

thanks for your reply.

I've solved the problem in the meantime.
As usual, it was my own fault, I just couldn't see the forest for the trees.

For testing, the VMs and also the workstation were in two networks at the same time, via DHCP.
So Linux set up two default gateways for me. That could never work.
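Roughly what the duplicate default routes looked like (interface names and IPs here are purely illustrative):

$ ip route show default
default via 192.168.11.1 dev eth0 proto dhcp
default via 192.168.12.1 dev eth1 proto dhcp

# for a quick test, one of them could simply be removed:
$ sudo ip route del default via 192.168.12.1 dev eth1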

If source and destination host are in only one network, everything works as expected. Routing works, and so does the firewalling.

This thread can therefore be closed.
#2
I've now passed the interface carrying the tagged vlans straight through to OPNsense and created the vlans directly in OPNsense.

The problem remains the same, though.

To me it looks as if routes are missing. But OPNsense creates them itself as soon as I set an IP on an interface.
The firewall isn't blocking anything either, I can see that in the log.
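For reference, this is roughly how I check the routing table from the OPNsense shell (the target IP in the second command is just an example):

root@OPNsense:~ # netstat -rn -f inet          # full IPv4 routing table
root@OPNsense:~ # route -n get 192.168.13.10   # which route and interface a sample target would use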

Does nobody have an idea, or is there perhaps some information missing?
#3
German - Deutsch / Switching to VLANs is causing problems
February 12, 2021, 04:09:37 PM
Hi everyone,

I hope it's not a problem if I post my topic here again in German.
I've already described my problem in English here: https://forum.opnsense.org/index.php?topic=21423.0

If that's not okay, feel free to close the English thread.

I'm also adding some more details here.

So, what this is about:

Currently I have a network with just one subnet and now want to split it into several vlans with different subnets.

OPNsense (version: OPNsense 21.1.1-amd64) runs as a VM on a KVM hypervisor (CentOS 8).
So far the VM has had two interfaces:
1. wan -> an interface that is passed through completely from the hypervisor and is connected to the Vodafone router
2. lan -> an interface on br0 of the hypervisor.

This has worked perfectly so far. Internet access works, and so does communication between the hosts.
The whole lot hangs off a MikroTik switch.
I've attached a small diagram that roughly shows the setup. Not every device is on it, but it should be enough for the description.

So I tagged the required vlans and continue to hand vlan 1 (the default vlan) to the devices untagged.
On the hypervisor, bridges are created on top of vlan interfaces, like this:

eth0 -> eth0.12 -> br12

OPNsense then has an interface on each of these bridges.
Each of them was given a static IP and enabled. Gateways are set to "auto", and I didn't create any routes myself. OPNsense did create some on its own, though.
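For completeness, this is roughly how one of these vlan-plus-bridge constructs can be built by hand with iproute2 (a sketch only, using the vlan 12 example; in practice the configuration is persisted via the normal CentOS network configuration):

# on the hypervisor: vlan 12 on eth0, bridged into br12
ip link add link eth0 name eth0.12 type vlan id 12
ip link add name br12 type bridge
ip link set eth0.12 master br12
ip link set eth0.12 up
ip link set br12 up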

Communication within a single vlan works, too.
To test this, I tagged vlan 12 towards my workstation and created a matching interface there.
When that interface is up, I can ping OPNsense, the hypervisor and the DNS server on their IPs in vlan 12, or reach them via ssh, for example.

If I remove the tagging on my workstation again, that no longer works. So from my point of view the vlan tagging appears to be correct and working.
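For the test, the interface on the workstation was set up roughly like this (interface name and address are just examples):

# tag vlan 12 on the workstation uplink and give it an address in 192.168.12.0/24
ip link add link enp3s0 name enp3s0.12 type vlan id 12
ip addr add 192.168.12.50/24 dev enp3s0.12
ip link set enp3s0.12 up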

What I don't understand, though, is why OPNsense refuses to route between the vlans.
To be able to test the communication at all, I created any/any/any rules on the vlan interfaces in OPNsense. I would of course replace these with real rules later, but first the whole thing has to work.

I also can't run a traceroute from OPNsense's vlan12 interface to an IP in vlan13, for example.
Did I forget something, or do I have to create static routes or a gateway after all?
My understanding was that routing should basically work on its own, since OPNsense has created routes itself, at least according to the web GUI.
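What I tried from the OPNsense shell, roughly (the target IP is just an example host in vlan13):

root@OPNsense:~ # traceroute -n -s 192.168.12.1 192.168.13.10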

Thanks a lot in advance for any tips and help.
#4
I cannot even do a traceroute from one vlan to another. Tried that in the webui of OPNsense.
Just getting timeouts there.

To be sure: do I have to create any routes or gateways by hand for this to work?

I assigned the interfaces, gave them IPs and left the rest on default.
OPNsense created some routes automatically.
#5
Yes, I see packets.

I use plain kvm for virtualization, running on CentOS 8.
Just libvirt, virsh and virtmanager.

I attached a small network plan.

So, if the interface for vlan12 on the workstation is active, I can ping the hypervisor and the DNS server on that network. This is not visible in the fw log, as it does not pass the fw afaik.
If I take that interface down, I cannot ping them anymore. The ICMP packets for this ping are visible in the fw log. They are allowed, but they do not seem to reach the target. I can still ping OPNsense on the IP 192.168.12.1, but nothing else in vlan 12.
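To see whether those allowed packets actually leave OPNsense on the vlan 12 side, a capture on that interface from the OPNsense shell should show them, roughly like this (vtnet1 is just a guess at the right interface name in my setup):

root@OPNsense:~ # tcpdump -ni vtnet1 icmp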

The test machine is just there to have a machine in vlan 13 to play around with. I can ping the gateway from test01, but no hosts in other networks.
#6
Hi,

I am in the process of migrating from a flat network to a segmented network and have some problems with opnsense (or the network) which I do not understand.

The current state of my network is:
one subnet (192.168.11.0/24) in one vlan (the default vlan on my switches).
This works.

The plan is to have multiple different vlans, e.g. for servers, clients, iot-devices, the usual.

I use OPNsense as my router and it runs as a VM on KVM (CentOS 8).

So, I tagged the necessary vlans on the needed ports, while the default vlan is still passed untagged, so that users do not notice anything while the network gets migrated.
I added bridges on top of vlan-interfaces on the KVM-host which are used by OPNsense. OPNsense should be the gateway in all the new networks.

Let's take two networks as an example:

vlan12: 192.168.12.0/24
vlan13: 192.168.13.0/24

OPNsense has an interface in each. vlan12 uses static IPs for clients, vlan13 uses DHCP for clients. OPNsense always has the first IP in the network.
Just to make it clear: OPNsense does not really know about the vlans, they are handled by the KVM host. OPNsense just has normal interfaces.
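For reference, each of these networks is simply a plain bridge interface handed to the VM by libvirt, roughly like this (the domain name "opnsense" is an assumption here):

# attach a virtio NIC on bridge br12 to the VM definition (domain name assumed)
virsh attach-interface opnsense bridge br12 --model virtio --config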

Now here comes my problem:
Clients on the same network can reach each other; to different networks, I get nothing.
I initially thought I messed up the vlan tagging. But to test that, I tagged vlan12 to my workstation and gave myself an IP in that network, and I can reach clients that have an IP in that network just fine. Note: my workstation of course still has an IP in the default network.

I am really scratching my head here, but I think I forgot something really obvious.
After I assigned the IPs in OPNsense, I created basic firewall rules on the interfaces.

The rules are basically any->any->any rules.
They are not meant for security. I just wanted to make sure the connection works at all before implementing actual rules.

Anyway, from the fw live log I can see that OPNsense does not block my packets. It seems like OPNsense does not route between the networks, but the routes look right.

I cannot even ping between the networks. Clients inside the same vlan can ping each other just fine. Other services like ssh work fine too inside the same vlan.

Any help, idea or bump in the right direction is greatly appreciated.
#7
So,
after setting kern.random.harvest.mask to 511 and observing the load of the system for a few days, I think this is solved now.
Without serving any traffic the system sits at 0%-8% load, which is fine for me.
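For anyone finding this later, the change itself (set live in the shell for testing, then made permanent via System -> Settings -> Tunables):

root@OPNsense:~ # sysctl kern.random.harvest.mask=511   # apply immediately for testing
root@OPNsense:~ # sysctl kern.random.harvest.mask       # verify the current value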

Thanks a lot for your help.
#8
I agree, because of some bug, Q-35 does not seem to be possible at the moment.

I think I am gonna leave this machine running for some time and play around with the kern.random.harvest.mask.
#9
Thanks a lot for the additional input.

Quote from: Gary7 on February 02, 2020, 08:55:50 PM
Since the process rand_harvestq is using a significant amount of CPU, you could investigate changing the value of "kern.random.harvest.mask".
The default setting in OPNsense is kern.random.harvest.mask=2047

root@OPNsense:~ # sysctl kern.random.harvest
kern.random.harvest.mask_symbolic: UMA,FS_ATIME,SWI,INTERRUPT,NET_NG,NET_ETHER,NET_TUN,MOUSE,KEYBOARD,ATTACH,CACHED
kern.random.harvest.mask_bin: 000000000011111111111
kern.random.harvest.mask: 2047

The UMA (universal memory allocator also called zone allocator) has a potentially high rate. I don't know if UMA acts any differently on a VM vs hardware.

You could determine if CPU load decreases when lowering kern.random.harvest.mask:
kern.random.harvest.mask = 2047    OPNsense default
kern.random.harvest.mask = 1023    don't use UMA
kern.random.harvest.mask = 511     FreeBSD default
kern.random.harvest.mask = 351     max throughput according to some documentation that I found

You can set it using sysctl in a shell for testing, but I found that to set the value permanently, I have to use the GUI: System -> Settings -> Tunables and add kern.random.harvest.mask

Disclaimer: If any of my information is incorrect, please correct me.

Good Luck

I tried the tunable and I think it also brought the load down a little bit more, but reading this comment:

Quote from: allebone on February 03, 2020, 02:22:48 PM
What machine type and nic driver type are you passing to the vm from KVM?

I looked up what machine type I created and, by accident, it is an i440FX machine, which is not what I want.
The NIC gets passed through, the driver is virtio.

Anyway, as this is not a system that serves any traffic yet, I am going to go ahead and set it up again, because migrating i440FX to PC-Q35 is nasty and I do not want to mess it up.
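For anyone wondering, the machine type of an existing VM can be checked on the KVM host (assuming the domain is called "opnsense"):

# the machine= attribute in the <os><type> element shows i440fx vs q35
virsh dumpxml opnsense | grep -i 'machine='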

I will keep you posted on the status.

Thanks again for the input and ideas.
#10
Hi,
thanks for your tip.

To my surprise it actually helped a little.

The load now ranges between 0% and 50%, according to the dashboard.
That is better, but still not great for a system not serving any clients yet :D
#11
Hi everyone,

I am in the process of migrating from a DD-WRT router to a virtualized setup with OPNsense.

I installed it three days ago with 19.7 and updated today to 20.1.
As I mentioned, OPNsense runs as a VM. The hypervisor is KVM on CentOS 8. The CPU is an AMD EPYC 7282.

OPNsense gets two cores and 4 GB of RAM.
It also gets one NIC via virtio, which sits on a bridge. Another NIC is passed directly from the hypervisor to the VM via macvtap.

Directly after the installation I noticed that the CPU was constantly at 100%, which left me wondering, because the system was not doing anything.

Looking at top in the shell, I see that unbound constantly has 6-10% CPU usage and python 6-8%.
All other processes are at essentially 0%.
Still, the CPU shows 36-45% user load, 30-45% system load and 5-40% idle.

Where could that load be coming from?

Looking in the webgui, I see a process "[rand_harvestq]" hovering around 30-50% CPU, while [idle{idle: cpu0}] is at the top most of the time.
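For reference, the kernel threads are easier to see directly in the shell (-S includes system processes such as [rand_harvestq], -H shows individual threads):

root@OPNsense:~ # top -SH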

Any clues as to what the problem could be here?

Any help is much appreciated.