Messages - TrixieBell

#1
I changed all my block rules to reject and am still seeing this issue. I have throttled my Nessus scan down to a single host and TCP scans only, but it still grows the state table alarmingly.

Interestingly, if I scan from one subnet to another where there are no drop rules (only allows), it doesn't fill the table.

I was wondering: would it be worth setting either State Type to None, or Max source states to a value (it says Maximum state entries per host, which I think sounds like a great idea), on my drop rules?
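For anyone following along, this is the rough script I've been using on the firewall to see how many states the scanning host is actually holding, which should help me pick a sensible Max source states value. It's only a sketch: it assumes shell access to the box with Python available, and the pfctl -ss parsing is naive (it just grabs the first IP on each line), so treat the numbers as approximate.

```python
#!/usr/bin/env python3
"""Rough count of pf states per IP, to help pick a Max source states value.

Sketch only: assumes it runs on the firewall where `pfctl -ss` is available;
the output parsing is naive and may need adjusting for your pf version.
"""
import re
import subprocess
from collections import Counter

# Dump the current state table; each line describes one state entry.
states = subprocess.run(
    ["pfctl", "-ss"], capture_output=True, text=True, check=True
).stdout.splitlines()

counts = Counter()
for line in states:
    # Grab the first IPv4 address on the line as a rough per-host key.
    m = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})", line)
    if m:
        counts[m.group(1)] += 1

print(f"total states: {len(states)}")
for ip, n in counts.most_common(10):
    print(f"{ip:<15} {n}")
```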

#2
It sounds like the traffic on those two ports is going out via the native vlan1.

Unless the devices you are plugging into ports 11 and 18 are VLAN aware and are setting the VLAN tag on their own traffic, you probably want to remove both VLANs from those ports and add back in only untagged VLAN 3.

Keep the tagging on VLAN 3 for your uplinks and OPNsense ports.
#3
I have noticed the log lines disappearing almost as soon as they appear. It isn't doing it for me just now, but I think it happened when I had every packet being logged. I had assumed there was a limited cache of logs to filter and that those logs were scrolling out of the cache too fast.

I did like being able to search on DNS name when you turn Lookup hostnames on: being able to just type ESX and see the logs for all my ESX servers, etc.
#4
I recently upgraded from an ancient OPNsense 20.something install to a modern 23.bleedingedge version, and the only change I noticed in the Web GUI was that the live log search went from a quick and easy broad search with regex to a clunky dropdown option.

I loved the other one. It was quick and easy to use and, in my opinion, one of the best features of the GUI (which in general I think is pretty great).

I'm not looking for replies, just letting people (devs perhaps?) know how much I miss it.

RIP nice search. :o(
#5
Possibly off topic, but I thought perhaps it belonged in the Intrusion Prevention threads.

We use Nessus for vulnerability scanning. Currently, if I scan a subnet on the other side of my OPNsense firewall, it quickly fills up the state table on the firewall and I end up DoSing myself.

It doesn't seem to matter whether I use SYN, UDP or TCP port scanning. I assume this may be related to block vs reject in my default rules?

The Nessus docs say:

"It may also be beneficial to review which port scanner your policy is using. While the SYN scanner is the default, and works well in most situations, it can cause connections to be "left open" in the state table of the firewalls you're scanning through. The TCP scanner will attempt a full 3-way handshake, including closing the connection."

But this doesn't seem to make much difference in my case.
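For reference, this is roughly how I've been watching the state table fill up while a scan runs. It's a quick sketch: it assumes shell access to the firewall and that pfctl -si prints a "current entries" line, so the parsing may need tweaking.

```python
#!/usr/bin/env python3
"""Poll the pf state table size while a scan runs.

A quick sketch: assumes it runs on the firewall where `pfctl -si` is
available and that its output contains a "current entries" line.
"""
import re
import subprocess
import time

while True:
    info = subprocess.run(
        ["pfctl", "-si"], capture_output=True, text=True, check=True
    ).stdout
    # Look for the "current entries  NNN" line in the state table summary.
    m = re.search(r"current entries\s+(\d+)", info)
    print(time.strftime("%H:%M:%S"), m.group(1) if m else "not found")
    time.sleep(5)
```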

Can anyone confirm whether changing block to reject might fix this, or does anyone have any other suggestions or experience with this sort of issue?

Thanks.
#6
Hi Everyone,

I finally worked out enough of the kinks in my config for the DEC 2700 to put it into production, though I am currently only using the 1G ports (I think I have something configured incorrectly on my 10G switch, but that's something for a different post!).

Now, what I really want to know is: what performance tweaks should I be making?

Hardware acceleration?
Hardware checksum offload?
Hardware TCP segmentation offload?
Hardware large receive offload?

Currently they are all off, as per the defaults, and on normal load it's ticking along at about 2% CPU, so I doubt any settings would affect my real-world performance, but I just assume, since this is hardware spec'd and provided by OPNsense, that some hardware acceleration should be possible!
I am seeing about double the RAM usage that I had on my old OptiPlex setup, but this still only adds up to 20% utilization, so it's not concerning.
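In case it's useful to anyone answering, this is the quick check I've been running to see which offload flags the NICs currently report. It's only a sketch: it assumes FreeBSD's ifconfig output with an options=...<FLAG,FLAG,...> line per interface, so it may need adjusting.

```python
#!/usr/bin/env python3
"""List which hardware offload flags each NIC currently reports.

A rough sketch: assumes FreeBSD's ifconfig, whose per-interface output
includes an options=...<FLAG,FLAG,...> line. May need adjusting.
"""
import re
import subprocess

OFFLOADS = {"RXCSUM", "TXCSUM", "TSO4", "TSO6", "LRO"}

output = subprocess.run(
    ["ifconfig"], capture_output=True, text=True, check=True
).stdout

iface = None
for line in output.splitlines():
    # Interface blocks start at column 0, e.g. "em0: flags=..."
    if line and not line[0].isspace():
        iface = line.split(":", 1)[0]
    # Only match the interface's own options=<...> line.
    m = re.match(r"\s*options=[0-9a-fA-F]+<([^>]*)>", line)
    if m and iface:
        enabled = OFFLOADS & set(m.group(1).split(","))
        print(f"{iface}: {', '.join(sorted(enabled)) or 'no offloads listed'}")
```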
#7
23.1 Legacy Series / Re: Odd issues after upgrade
February 15, 2023, 09:00:02 PM
Having bounced this around a few people and talked it out, I think I have worked out the 802.1x issue.

I had forgotten (and possibly left out of some documentation) the fact that on the WAN connection we have two VLANs: the gateway for the user network lives on the WAN router (for redundancy in case of a site-wide failure, to have DHCP on a split scope across the WAN... I think), and splitting this out onto the firewall port meant that traffic to the gateway IP traverses the firewall. This was working okay for IP-based traffic, as the firewall was routing the traffic somehow, but broadcast traffic was not working, hence DHCP etc. failed.

Still... doesn't explain my proxy issue.
#8
23.1 Legacy Series / Odd issues after upgrade
February 15, 2023, 08:07:27 PM
Hi All,

I upgraded from OPNsense 20.1-amd64, running as a router on a stick on an old Dell OptiPlex, to a brand new shiny DEC 2700 running the bleeding-edge production version, fully up to date and using three interfaces to route.

I did this by dumping the config and restoring it to the new box, changing the interfaces to be correct, and that's pretty much it. Just about everything worked: it was routing and letting me access between VLANs on one interface, and also routing down the two interfaces I split out (internet and WAN).

The only thing that seemed to be causing issues was the proxy->internet connection: I couldn't get any traffic out. It wasn't a routing issue, as I could SFTP out and VPN in, so...

I enabled logging on all my rules and couldn't see anything blocked. In fact, I could see DNS traffic being allowed from my proxy server, but it wasn't getting a response; I could also see traffic to the proxy, and the Squid logs were showing the requests. I checked all the possible proxy and DNS options I could find on the new box and nothing was enabled, so I gave up and added a new floating rule: the proxy can go anywhere, any protocol, both directions. Hey presto, the internet worked, DNS was getting a response, happy days. I went home to bed.

I came in this morning and found that 802.1x port authentication wasn't working for PCs (it was working fine for phones and printers, though) and RAM usage on the new box was sitting at 80% (which is odd, as the new box has double the RAM of the old one, which sits below 20%).

I saw no drops on the rules (which were still logging from last night, which I'm hoping was the cause of the high RAM usage), and on the NPS server I can see requests from all the computers, except they aren't authenticating with their computer account and certificate as they are meant to; the requests are coming in as MAC address authentication...

Anyway, I reverted to the old box and everything was instantly okay.

Can anyone think of any reason why going from a dump of rules and settings on v20 and importing it on v23 would cause these weird issues?