Messages - JL

#1
SOLVED, thanks to the articles below.

The main assumed cause is that the MAC address is the same for all VLANs (obviously), as the Debian page documents.

auto lif
iface lif inet manual
        bridge-ports eth0
        bridge-ageing 0
        bridge-stp off
        bridge-fd 1
        bridge-vlan-aware yes
        bridge-vids 401 402 901 1500
        mtu 1422


I'm not certain whether the MTU reduction is required; it should not matter much.

On the switch, the port is now set back to General - accept tagged only.

Inside OPNsense the VLAN interface has the MAC of the parent set; no other modification was made to the VLAN interface.
At this point I don't think that really matters.

I also set the tunable net.link.bridge.pfil_bridge to 1:

https://docs.opnsense.org/manual/how-tos/lan_bridge.html#step-six
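
For completeness, here is how I sanity-check this setup; a quick sketch, assuming the bridge is named lif with member eth0 as in the config above (iproute2 on the hypervisor, the tunable checked from the OPNsense shell):

# on the hypervisor: confirm the member port carries the expected tagged VLANs
bridge vlan show dev eth0
# confirm MAC ageing is really 0 on the bridge
cat /sys/class/net/lif/bridge/ageing_time
# inside OPNsense: confirm the pfil_bridge tunable took effect
sysctl net.link.bridge.pfil_bridge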


#2
Quote from: vigeland on January 04, 2025, 05:42:08 PM: I have no deny rule on the LAN interface. There are only the standard 2 "allow all" rules (IPv4, IPv6).
And why did it work for X years with these rules, and only stopped working with the update? Additionally, others have similar problems?
I've wondered for some time about OPNsense and whether it is reliable to work with in all environments.

I'm using it primarily as a VM firewall. In that role it seems to be 'not great, but works'.

One culprit right now is using an OPNsense VM with a hypervisor bridge that has a physical interface with multiple VLANs assigned.

The hypervisor sets the PVID egress as untagged on the bridge and keeps the VLANs tagged, which is as it is. However, the tagged VLANs are visible as untagged inside the OPNsense VM. And that's that; there is no documentation pointing out what to do or not to do.

When using multiple VLAN IDs on a single bridge, the only solution seems to be to create a bridge per VLAN, which doesn't really make much sense but works.
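
For illustration, a per-VLAN bridge stanza in /etc/network/interfaces could look roughly like the below; the bridge name vmbr401 and member eth0.401 are placeholders, not from my actual setup:

auto vmbr401
iface vmbr401 inet manual
        bridge-ports eth0.401
        bridge-stp off
        bridge-fd 0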

#3
Quote from: Patrick M. Hausen on January 06, 2025, 09:08:24 PM: You can use PCIe passthrough of the network interface and let the OPNsense handle all of the VLAN stuff. No idea about your specific setup, I don't run OPNsense that way, sorry.
Thanks for the reply. I'm trying that now; it had escaped me to try this.

I had something like the setup described here working in the past but forgot to document it properly.

The issue is apparently not unique and a repeated irritation for users running OPNsense in a VM where the use of a bridge is forced.

edit /

I did find this is likely a known issue with Linux:
https://wiki.debian.org/NetworkConfiguration#Bridging_without_Switching

Another article speaks about how IPv6 can cause a bridge to fail.
More here: https://wiki.linuxfoundation.org/networking/bridge

https://wiki.linuxfoundation.org/networking/bridge#no_traffic_gets_trough_except_arp_and_stp
#4
Hey, please think with me for a moment.
 
Using 24.10 in a VM hosted on a Linux VM server. The OPNsense VM is connected to a Linux bridge which simply passes all (tagged) VLANs from the interface connected to the switch.

Observation: VLAN traffic is seen on the physical interface and the bridge with the VLAN tag present; the switch only offers tagged VLANs.
Problem: inside the VM, though, the traffic appears untagged, since it is not observed on the vlan0.401 interface, for example.
Validation: when connecting another VM to VLAN 401, the communication works well.
Question: how do I make sure the VLAN tag from the hypervisor bridge is passed through the parent to the OPNsense VLAN interface?

The Linux bridge config looks like the below; the OPNsense VLAN is attached to the parent, which has the bridge interface on the hypervisor host assigned.
---
auto lif
iface lif inet static
        bridge-ports eth1
        bridge-stp off
        bridge-vlan-aware yes
        bridge-vids 401 402 901 1500
        
For one other interface the bridge has a 1:1 mapping like the one below; this works well since the VLAN is not "inside" the VM.

auto dif
iface dif inet static
        bridge-ports eths3.700
        bridge-stp off
        bridge-vlan-aware yes
        bridge-vids 700

I'd prefer to 'pass through' the interface to the VM, but this can only be done over a bridge, leading to the current problem situation.
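
A quick way I'd check where the tag gets lost (interface names are placeholders; VLAN 401 taken from the config above):

# on the hypervisor, on the bridge member facing the switch
tcpdump -e -nn -i eth1 vlan 401
# inside the OPNsense VM, on the parent interface of the VLAN (e.g. vtnet1)
tcpdump -e -nn -i vtnet1 vlan 401

If the first shows 802.1Q-tagged frames and the second shows nothing, the tag is being stripped somewhere on the bridge before it reaches the VM.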

Br,

JL
#5
Quote from: peterwkc on November 27, 2024, 09:23:29 AM: Dear all, I had installed OPNsense quite a long time ago but recently my LAN cannot browse the internet. Most probably my ISP has hacked my router. (Don't argue this.)

I'm looking for methods to harden my OPNsense router. Please suggest. Thanks.

Harden OPNSense Methods:
1. Disable SSH services
2. Disable root user in web gui option
3. WiFI based on MAC Address only
4. Installed Suricata IPS
5. Disable boot into single user mode to prevent hacker change password
6. How to enable sudo?

Hire a professional, you are not making much sense and show a lack of objectivity without providing sensible evidence.

#6
Posting this because I've spent extraordinary time finding it out, since I don't understand the context (yet).

Enabling SYN cookies blocks some sites but not all; this is particularly so for sites hosted by sucuri.net such as linuxmint.com, and reportedly also Facebook sites. Which makes one wonder what the common factor is here.

The solution is two-fold: either disable SYN cookies or set them to adaptive. When setting them to adaptive, use a start value of >= 50% but < 100%.
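
For reference, a sketch of what I believe the adaptive setting corresponds to underneath (assuming the GUI option maps to pf's syncookies directive; the percentages are only an example in the range mentioned above):

# pf.conf equivalent (illustrative only)
set syncookies adaptive (start 60%, end 30%)
# or disable them entirely
set syncookies never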

Hope this helps.

br,

Joris
#7
Missing the point goes two ways, so it seems. Yeah, I'm also sleep deprived and known to be grumpy, yet helpful, I can only hope.

The distinct impression that the platform (OPNsense obviously, since it is the topic title) is faster was/is a compliment. "Pages loading faster" is indeed a generic statement, since both the UI and surfing are perceived to be generally (far) more responsive.

"Lack of consitent behaviour" is the observation a working config does not mean it works reliably for the next few hours, in part it is sure to be me. I'm also not being very specific to OPNSense but point at the dumpster fire Ubuntu in combination with OPNSense administration. For OPNSense it is an impression that just does not go away. It seems states are not purged instantly on commit (apply changes) and linger for some time until the new config is actually in effect.

Paired with the absolute madness Ubuntu is projecting with the latest LTS when it comes to networking on the desktop, this past impression does not go away easily. Having worked with Juniper, CheckPoint, Cisco, Meraki, WatchGuard and an odd number of other firewalls, I hope I'm not simply imagining things. I'm stating an observation which is not easy to pinpoint either.

The Suricata issues occur with every set-up that does not stick to the default MTU (1500); I think I have observed this for MTU sizes from 1500 to 2048. I've shared my fix with support and posted the fix here, and nothing happened with that information, as if everyone knows already. Yet the forums are full of people observing Suricata starting and stopping.

The fix is literally one value in the tunables; adding this as a default would save people loads of time.
Fix = Tunable: dev.netmap.buf_size = <highest MTU size here>
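
To check what is active right now (a sketch only; 9000 is just an example value for a jumbo-frame setup, and the persistent setting should go in via System > Settings > Tunables rather than only the shell):

# show the current netmap buffer size
sysctl dev.netmap.buf_size
# what the tunable entry should end up at for a 9000-byte MTU
# dev.netmap.buf_size = 9000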

People like me manage firewalls among other things and don't have time to make a dedicated life out of firewall administration for one product.

Regarding GeoIP: I meant, what use is adding this if it is not displayed anywhere in the logs? Right?

Oh, now I read that the business edition has that GeoIP db on board already, so the logs ARE void of GeoIP info and there is no way to add a column to look this up. TL;DR on that one, I could have known. The 'full help' function in OPNsense could be useful to inform about this.

For example: checking the alias I created for Belgium, I don't see IPs registered for Belgium, such as 193.191.245.121.
When I do a lookup reference it tells me it is Belgium, but the list of subnets only shows 0 values.

By sharing the GeoIP link you also gave me the collateral finding of 'URL tables', which I'm sure to explore; thanks for that, a new feature to test.

Either way, OPNsense is affordable and good enough for my purpose.
I'm sure people who can dedicate a considerable amount of their time will master building larger and more complex networks with it.

#8
A bad week, karma, or something entirely different; who will tell.

Things to know for working with 24.x:

Foremost: know that Kea DHCP requires the GATEWAY-IP/MASK to be set, not the subnet/mask as the GUI help suggests. Though Kea is interesting, it seems sub-par in features compared to ISC and also less configurable. Curious what will come of this.

There is a distinct impression the platform is just faster; pages load in an instant.

Unbound is again not behaving as well as it could; enabling DNSSEC is not recommended at first, just leave it off. My assumption is an update may fix whatever is going on. If not, let me know what I did wrong.

Pairing Unbound with DNSCrypt can be a headache. Just point only the "query forwarding" to the DNSCrypt service and don't combine this with "DNS over TLS" from Unbound; stuff breaks here :-D Also, Unbound has its own visual dashboard.
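
What the intended forwarding looks like in raw unbound.conf terms, as I understand it (a sketch; the DNSCrypt-proxy listen address and port are whatever the plugin is configured to, 127.0.0.1@5353 is only a placeholder here):

server:
    do-not-query-localhost: no      # needed when the forward target is on localhost
forward-zone:
    name: "."
    forward-addr: 127.0.0.1@5353    # dnscrypt-proxy listener (placeholder port)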

Writing rules "feels different"; could be me, paired with a lack of sleep.

I've also had the weirdest experience getting the network to actually work properly, in no small part due to Ubuntu 24 LTS; it is not recommended to upgrade just yet. Something seems alive in there, and it is cheeky and mischievous.

Somehow "automatic outbound NAT rules" gave up on me a few times. I had to switch to hybrid mode. I mean, is there a ghost in the machine or what ?!?  This makes a requirement to hide each network separately behind the WAN address. Including the WAN network it seems.

There are quite a few small and notable changes and improvements. I'm actually getting curious about OPNsense again. Though I do think there's too much awkwardness for casual use, it's growing on me. I'm sure to try out the central management features.

There's something different about how gateways are managed, not sure what; it seems too easy now?

There's something odd about "Dynamic gateway policy": do I need it enabled or not? The change does not seem to propagate or act consistently over time.

Lack of consistent behavior seems to be a growing trend with open source software lately, and it is quite concerning. Settings were saved but then were not; flows seemed to pass until they did not. The mess I've seen Ubuntu make of simple things is just disingenuous. OPNsense seems to suffer from "ghost states" and can sometimes use a reboot; it is not advisable to accept rebooting as standard practice.

What I profoundly miss is a way to add exceptions to the bogon filtering. Now it has to be disabled because it matches DHCP and disrupts that. There appear to be some things missing here; maybe a feature was deprecated?

Suricata borks again. I hope I can find back the post on how to keep it up and not have it randomly crash; it is stupid this is not documented or fixed in the build. Yes, all hardware offloading is disabled. Oh wait, yes, Suricata stops when the MTU is not set consistently for all interfaces; if you change the MTU, update it here too. At least that used to work. Now it reports an "<Error> -- opening devname netmap:vtnet1/R failed: Invalid argument"; I don't see why and have not found out why yet. Oh no, it borks for all interfaces, with the same invalid-argument error. Let's disable IPS mode, that worked last time, until I remember the fix.

Just in case, here is how I fixed it last time: https://forum.opnsense.org/index.php?topic=38140.0

Why is GeoIP under Firewall: Aliases? Why is it documented so vaguely what URL to point it at?
It works, though. I think. Can I actually see the GeoIP info anywhere?


All in all, it's okay to work with. It is a building block rather than a one-stop toolkit.
Reminds me I have to get Elasticsearch up and running again and pair it with Grafana and stuff.

Setting up DNSCrypt requires little, but you should know what to do.


#9
Is there an error message visible?
#10
Having run Suricata on OPNsense with virtio for many years, I do not have such issues.
The internet line is 100 Mbps but the LAN is set to 1 Gbps.

Here's my best guess.

Don't try to "tweak" network drivers; this is overridden in many cases due to limitations in the driver(s) and such.
Yes, especially not with e1000 and related cards.

I've had long-time issues with Suricata uptime until I finally had some time and fixed it by setting dev.netmap.buf_size to the relevant MTU value.

Also consider evaluating the MTU for the bridge interfaces. To my understanding it is best to have large MTU for Gbps networks.

With some Linux distributions you may (at least this used to be so) need to tweak sysctl settings to allow for large transfers; the usual suspects are sketched below.
I assume you've checked that?
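
For reference only (generic Linux kernel sysctls, not OPNsense settings; the example value is arbitrary):

# current socket buffer limits
sysctl net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# example of raising the maximums (adjust to taste, persist in /etc/sysctl.d/)
sysctl -w net.core.rmem_max=16777216 net.core.wmem_max=16777216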

Also, are there any console messages visible?

A wrong MTU size, for example, will throw an error like: netmap_buf_size_validate error: large MTU (8192) needed but igb1 does not support NS_MOREFRAG
#11
check my post here (both IPS and IDS are working now)
If need be I can help out debugging this issue.

https://forum.opnsense.org/index.php?topic=38140.0

The main issue behind Suricata failing or not failing is MTU inconsistency.

There's a typical overhead (8 bytes for Windows / 22 bytes for Linux) to consider, but bridges and PPP also add overhead.

So whether you start with the default MTU of 1500 (1518 on the wire) or have jumbo frames (<= 9000 MTU), this will have a great effect.

I can say with confidence this approach works. Suricata has now been up 100% of the time for 24 hours.
#12
don't try tuning network cards


Unless somehow the driver is very broken and the system deadlocks, there should be output in /var/log about what is causing the freeze.

did you try inserting a different network card ?

The bnxt driver loading suggests this is a server network card with multiple NICs?

Since you mention the ZFS pool, is this by any chance OPNsense running in a VM?
#13
check my post here (both IPS and IDS are working now)

https://forum.opnsense.org/index.php?topic=38140.0

The main issue behind Suricata failing or not failing is MTU inconsistency.

There's a typical overhead (8 bytes for Windows / 22 bytes for Linux) to consider, but bridges and PPP also add overhead.

So whether you start with the default MTU of 1500 (1518 on the wire) or have jumbo frames (<= 9000 MTU), this will have a great effect.
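
To make the numbers concrete (my arithmetic for the standard Ethernet case, separate from the 8/22-byte figures above):

1500 payload + 14 Ethernet header + 4 FCS  = 1518 on the wire
+ 4 for an 802.1Q VLAN tag                 = 1522
PPPoE takes 8 bytes of payload: 1500 - 8   = 1492 usable MTU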

I can say with confidence this approach works. Suricata has now been up 100% of the time for 24 hours.
#14
Please like or share a comment if this post is helpful or if you have more questions.


This how-to-fix post is to inform people how Suricata crashes with OPNsense on Proxmox (any version) can be remediated. The advice here may not be suitable for production environments; I trust you know this already.

SHORT FORM specific to Proxmox


set all bridge interfaces used for OPNsense to the same MTU
(it may be required to set the bridge-if MTU to the physical interface MTU-22)

use the OPNsense VM interfaces intended for Suricata only with virtio network adapters
set the network adapter MTU to 1 so it adopts the bridge MTU from Proxmox (see the sketch after this list)
in OPNsense, leave the MTU for the interface blank
in OPNsense, for Suricata keep the MTU (default packet size) blank and disable promiscuous mode
in OPNsense, for Suricata set the exact network masks configured for each interface; it may help to add or remove networks to match the interfaces enabled for Suricata
add the tunables:
> set dev.netmap.buf_size to the value displayed in the NS_MOREFRAG error or to the Proxmox MTU value (trial and error)
> set dev.netmap.admode to 1
> the value in the NS_MOREFRAG error and dev.netmap.buf_size should end up the same
reboot the VM; Suricata should now be stable
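
For illustration, the relevant bits on the Proxmox side might look like this (the NIC line lives in /etc/pve/qemu-server/<vmid>.conf; MAC, bridge name, member port and MTU value are placeholders; mtu=1 tells Proxmox to use the bridge MTU for a virtio NIC):

# VM config: virtio NIC inheriting the bridge MTU
net1: virtio=BC:24:11:00:00:01,bridge=vmbr1,mtu=1

# /etc/network/interfaces on the Proxmox host: bridge with an explicit MTU
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500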


Context


The VM hardware has the Q35 chipset and uses virtio network interfaces.
The OPNsense VM has qemu-guest-agent installed.


Indicator (console output)
Jan 28 12:39:45 opnsense kernel: 385.664273 [2197] netmap_buf_size_validate error: large MTU (8192) needed but igb1 does not support NS_MOREFRAG

Assumption
This indicates an MTU inconsistency when the MTU is set >1500 on the bridge and this gets 'broken' somewhere between the bridge and the IPS. To my understanding the network interfaces available on Proxmox are well supported by OPNsense.

For non-virtualised systems the issue may be the same. Check the MTU of the network and match it on the physical interfaces. Consider subtracting 22 from the MTU for compatibility.

It is recommended to check whether the MTU on the bridge is >1500.

configure : within Proxmox

check and set the MTU of the VM-hardware network interface(s) to 1 so these adopt the MTU of the connected bridge
you can consider decreasing the MTU by 22 (this reduced value is referred to below as PMTU)


configure : within OPNSense


[ for Suricata ] under the 'advanced' section of the IPS service: check and/or clear the default packet size (MTU) setting
setting the MTU here can affect detection reliability and 'drop' or 'conflate' frames on inspection; consider setting MTU-22


[ for Interfaces ] check and/or clear the MTU settings for the monitored interfaces, OR (recommended) set the PMTU as the value
important: know that non-enterprise network cards may not support 'real' jumbo frames, which permit an MTU >1500


Look up the specifications for the network interface cards (NIC) and do not set the MTU higher than the hardware supports, even if the MTU on the connecting switch is set to a much higher value.
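
A quick way to test what the NIC driver actually accepts, from the OPNsense shell (illustrative only; igb1 is the interface from the error above, revert the change afterwards):

# show the current MTU
ifconfig igb1
# try the target MTU; the driver rejects it if unsupported
ifconfig igb1 mtu 9000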


[ for SYSTEM: SETTINGS: TUNABLES ] manually create the key dev.netmap.buf_size with value = <PMTU value>
this works around issues with some NICs where the MTU handling is not working well, so hard-set it here with this key


configure : optionally for OPNSense


[ for SYSTEM: SETTINGS: TUNABLES ] manually create the key dev.netmap.admode with value = 1
this avoids flapping between native and emulated mode for the network interface


[ for Suricata ] consider setting MTU-22 as the packet size for stability


Considerations

when the value for the MTU is cleared for an interface, it defaults to 1500
consider that this may severely impact IPS performance and/or accuracy

Resources

https://docs.opnsense.org/manual/ips.html
https://man.freebsd.org/cgi/man.cgi...eBSD+12.1-RELEASE+and+Ports#SUPPORTED_DEVICES
https://man.freebsd.org/cgi/man.cgi?vtnet
#15
original comment removed

This requires modifying the suricata.yaml file to include the correct sections for the mentioned app-layer protocols which are missing. This is a best practice, since the behavior will change in the future and the protocols will no longer be auto-enabled:


"This behavior will change in Suricata 7, so please update your config"



If you have not tweaked the suricata.yaml file, consider looking at a suricata.yaml from a more recent version.


Check whether these sections are present as such in suricata.yaml; consider adding them at the appropriate place.


        #- dnp3
        - dcerpc
        - ftp
        #- ikev2
        - krb5
        - nfs
        - rdp
        - rfb
        - sip
        - smb
        - snmp
        - tftp
        - dhcp:
            ......



    # Note: parser depends on Rust support
    ntp:
      enabled: yes


    dhcp:
      enabled: yes


    sip:
      enabled: yes
    http2:
      enabled: yes
    snmp:
      enabled: yes
    rfb:
      enabled: yes
    mqtt:
      enabled: yes
    rdp:
      enabled: yes