Messages - ilikenwf

#1
Feel free to move this thread if it isn't in the appropriate area.

Just in case any of these would allow us to use a remote VPS to bond multiple connections' bandwidth together, I wanted to share them here. Right now the only/best way to do this is a double-NAT setup, with an OpenWrt installation sitting between OPNsense and the multiple WANs.

This project is the most recent, but I think it is/was purely academic, though it may be enough for a start:

https://github.com/MPTCP-FreeBSD

Then, someone forward-ported the older MPTCP patch to FreeBSD 13:

https://github.com/RayGuo-ergou/freebsd-src/commits/mptcp-version13/

I realize routing and VPN connection considerations, among others, are needed to make all this work; however, I think we already have all the other pieces, save for the actual MPTCP implementation itself and a frontend/configuration method.

Here's a video from 2013 exhaustively outlining how it all would work:

https://www.youtube.com/watch?v=oNaMqWG6rqo
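As a rough illustration of what a native implementation would eventually let us configure, here is how MPTCP subflow bonding is declared on a current Linux kernel with iproute2 (e.g. on the OpenWrt hop in the double-NAT setup above). The interface name, address, and limits are placeholders, and FreeBSD/OPNsense would obviously need its own equivalent:

```sh
# Allow each MPTCP connection to open extra subflows and accept
# advertised addresses (limits are examples, not recommendations).
ip mptcp limits set subflow 2 add_addr_accepted 2

# Register the second WAN as an additional subflow endpoint;
# "wan2" and 192.0.2.10 are placeholder values.
ip mptcp endpoint add 192.0.2.10 dev wan2 subflow
```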
#2
I reverted to the 22.1 version of Unbound to fix this same issue, and after a reboot, it appears resolved.

For what it's worth, I'm using DNS over TLS and have blocklists turned on. Beyond that I'm not sure what could cause this issue, but it's something that needs to be addressed for sure!

I'm now on Unbound 1.14.0, down from 1.15.0, and went from terrible CPU usage back to perfect behavior once again.
#3
I ended up adjusting the ring size, and I'm also using stream dropping - most of the things I catch tend to be small and not part of large streams of data...
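For anyone chasing the same tuning, and assuming "stream dropping" here maps onto Suricata's stream bypass option, the relevant suricata.yaml fragment looks something like this (values are examples, not recommendations):

```yaml
# suricata.yaml - stop inspecting a flow once the reassembly depth
# is reached instead of continuing to burn CPU on large transfers
stream:
  bypass: yes
  reassembly:
    depth: 1mb   # example depth; 0 means inspect the whole stream
```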
#4
So, in IDS-only mode it can get up to around 400-500 megabits, with only 325 or so rules enabled...

I find all of this troubling, because someone else on Reddit with a dual-core Protectli i5 mentioned getting up to 600 down (maxing out his connection) with an older version of OPNsense and Suricata around 11 months ago.
#5
Interesting... it could be - or it could be that some kind of thread configuration is needed to really get Suricata running reliably on this machine.

The entire reason I use this machine (a corebooted ThinkPad T430) is that it runs coreboot... I could virtualize on my rackmount VM hosts, but that defeats the purpose of my paranoia's love of coreboot.
#6
I'm using Hyperscan with 46,355 rules enabled... that may not qualify as "not very many", I suppose, but it's a lot less than I ran without issue before the update, and before the ISP upgraded me to gigabit from 300 down...

With Suricata disabled, I run 0-3% CPU most of the time, sometimes spiking to 9% if I run a speed test...

And yes, this is with all the hardware offloading disabled on the NICs, since netmap still doesn't support any of it, and there are bugs in some hardware and drivers...

I have also tried Aho-Corasick, but it didn't seem to affect CPU usage.
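For reference, the pattern-matcher switch is a single suricata.yaml setting (OPNsense exposes it in the IPS settings), so comparing the two is easy:

```yaml
# suricata.yaml - multi-pattern matcher selection
mpm-algo: hs    # Hyperscan
# mpm-algo: ac  # plain Aho-Corasick, the usual default otherwise
```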
#7
So this update to 21.7 coincided with my ISP bumping me from 300 down to gigabit...

I'm using a 2nd-gen i5 with minimal rules enabled, and even just monitoring LAN I also get CPU spikes in the Suricata process above 100%, so a threading issue, I assume...

My download speed suffers as a result: I get 300-400 megabits down, versus bursting at/above 920 with Suricata disabled.

With the hardware and config I have this really shouldn't be the case...
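If it does turn out to be a threading problem, pinning Suricata's threads explicitly is worth a try - a minimal suricata.yaml sketch, assuming a 4-thread CPU with core 0 left for management:

```yaml
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]
    - worker-cpu-set:
        cpu: [ 1, 2, 3 ]
        mode: "exclusive"
        prio:
          default: "high"
```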
#8
Even if we only had a dumbed-down version of the box, it would be nice to have a way to add always_nxdomain entries for a few really enormously bad domains I block entirely, along with their subdomains.

One example is online-metrix.net, which does port scanning from all of its subdomains - it's the same one used by eBay, and it exploits WebSockets to do it.

server:
    local-zone: "online-metrix.net" always_nxdomain


I'll go the custom config file route for now, but I'm wondering if anyone has a better way of doing this - always_nxdomain is my favorite, though, since it blocks all subdomains as well as the parent domain.
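For the custom config file route, recent OPNsense versions pick up extra Unbound snippets from /usr/local/etc/unbound.opnsense.d/, so the zone survives GUI regenerations - at least that's how my install behaves; the filename below is arbitrary:

```
# /usr/local/etc/unbound.opnsense.d/nxdomain.conf (any *.conf name works)
server:
    local-zone: "online-metrix.net" always_nxdomain
```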