Messages - cditty

#1
Thanks Tomj,
I will verify whether Netflow is coming in. Can you tell me, back when it was unreadable (before the plugin), whether the reported source was something like 172.17.0.1? That is what I am seeing, and that is the Docker bridge gateway.
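
If anyone wants to double-check that address, this is one way to read the default bridge gateway on the Docker host (a sketch that assumes the containers sit on Docker's default bridge network):

# Print the gateway of Docker's default bridge network (typically 172.17.0.1)
docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}'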
#2
Look into keepalived. It goes the VRRP route, but it might work for load balancing OPNsense instances.
#3
I just tried this too and it seems to work:

ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime

I restarted the container and it was sticky.
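
One caveat: a symlink created inside the container will be lost if the container is ever recreated rather than just restarted. A more durable sketch, assuming the zone file exists on the host, is to bind-mount it read-only when the container is created, e.g.:

# Hypothetical throwaway container just to show the mount; the same -v flag could be added to the graylog run command
docker run --rm -v /usr/share/zoneinfo/America/New_York:/etc/localtime:ro debian:stable-slim date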
#4
For the timezone, I created a new user, assigned the role "admin" and set the timezone for that user. That seemed to do what I needed.
#5
So, this is working for me, but I am still seeing "garbage" coming across. Is anyone else seeing this? Most of my logs are clean, but there is still 10% that is not human readable. It is not affecting my logging, but it bothers me that I don't understand what it is, and I worry that maybe it is a sign of something that needs tweaking.
#6
I have just started and I have not progressed very far, but I do have things mostly working. Here are the things that I have done:

I have 3 Docker containers set up for this. I used Portainer to install Mongo 4.2 and Elasticsearch 7.17.3; their setup was simple enough:

Mongo:
port 27017:27017
volume data-mongo:/data/db
volume data-mongo:/data/configdb
restart-policy: unless-stopped

Elasticsearch:
volume data-elasticsearch:/usr/share/elasticsearch/data
restart-policy: unless-stopped
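
For reference, roughly equivalent docker run commands would look something like this. It is only a sketch of what Portainer creates; the container names match the --link flags used below, and the single-node discovery setting for Elasticsearch is my own assumption for a lone node:

# Sketch of the Mongo container (mirrors the settings listed above)
docker run -d --name=mongo \
  -p 27017:27017 \
  -v data-mongo:/data/db \
  -v data-mongo:/data/configdb \
  --restart unless-stopped \
  mongo:4.2

# Sketch of the Elasticsearch container; discovery.type=single-node is assumed
docker run -d --name=elasticsearch \
  -e discovery.type=single-node \
  -v data-elasticsearch:/usr/share/elasticsearch/data \
  --restart unless-stopped \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.3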

Then I installed Graylog via the CLI, since Portainer does not support linking:

docker run -d --name=graylog --link mongo --link elasticsearch \
  -p 12201:12201 -p 1514:1514 -p 9001:9000 -p 5555:5555 \
  -e GRAYLOG_HTTP_EXTERNAL_URI="http://127.0.0.1:9000/" \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -e GRAYLOG_PASSWORD_SECRET=16characterstarterstring \
  -v data-graylog:/usr/share/graylog/data \
  --restart unless-stopped graylog/graylog:4.3.0

I later edited the container via Portainer to extend the listening port range and to turn on UDP as required. I wanted to use docker-compose as a stack, but I was having issues, and instead of debugging I approached each app separately. Not as clean, maybe, but easier to get it going.
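
Side note: --link is a legacy Docker feature, and a user-defined bridge network gives the same name-based resolution between containers, which might also make a Portainer stack or docker-compose easier later. A sketch, assuming the container names used above (mongo, elasticsearch, graylog):

# Containers attached to a user-defined network resolve each other by name,
# so graylog can reach "mongo" and "elasticsearch" without --link
docker network create graylog-net
docker network connect graylog-net mongo
docker network connect graylog-net elasticsearch
docker network connect graylog-net graylog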

After logging into Graylog, changing to Dark Mode (perhaps the most important step ;)), and updating the user/pass/timezone, I started setting up the input, extractor, indices, and stream/stream rules.

Input: syslog udp -> Set Title, Port, check "Save full message"
Extractor: Import from https://github.com/IRQ10/Graylog-OPNsense_Extractors
Indices: -> Set (Title = Description =  Prefix = "opnsense"), Rotation Strategy -> Index Size, Max Size = 524288000, Max Number of Indices = 10
Stream: -> Set Title, Index Set -> opnsense
Stream Rules: gl2_source_input must match exactly 628e665caaa5017cfbc3f1ab, facility_num must be greater than 0

For me, once I had messages to look at, I could get "628e665caaa5017cfbc3f1ab" from "Show Received Messages" when looking at the Input.
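
If you want to sanity-check the input and stream rules before pointing the firewall at them, one quick test is to hand-craft a syslog line from any Linux box. The hostname graylog.example.lan is a placeholder for the Graylog host, and the <134> priority (facility local0, severity info) is just an example:

# Send one RFC 3164-style test message to the syslog UDP input
echo "<134>May 25 12:00:00 testhost testapp: hello graylog" | nc -u -w1 graylog.example.lan 1514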

I then configured OPNsense to send syslog.

System->Settings->Logging/Targets->New
Transport->UDP(4), Applications->Filter, Set Host/Port, do NOT check rfc5424

Checking rfc5424 (Syslog) format seemed like a good idea, but it will not work with the extractor.

At this point you should have basic FW logs making their way into GrayLog with all headers defined and searchable. This is as far as I have made it. Hopefully it is a starting point for someone else.

*** EDIT ***
Performance -

I am aggregating roughly 500 MB of logs per day
I have my Docker containers set up in a Proxmox LXC with 12 CPU cores / 10 GB of memory on a DL360
Currently have 7 containers spun up, including the 3 for logging
Metrics: averaging < 2% CPU usage and 5 GB of memory for the entire LXC
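
Those numbers are for the whole LXC; for a per-container breakdown, a one-shot snapshot from the Docker host works too:

# One-shot snapshot of CPU/memory usage per container (no live refresh)
docker stats --no-stream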


#7
OK, looking further into this, apparently the logs are coming over along with a bunch of "garbage?" Since I do not know what it is, I will just call it that. I added a rule to throw out anything with a facility_num <= 0, and that seems to have cleaned up the logs.
#8
OK, I have searched and I have not seen this issue; I am sure that I am overlooking something (hopefully simple).

I have installed Graylog 4.3 + Mongo 4.2 + Elasticsearch 7.17. I have set up inputs (and extractors), indices, and streams in Graylog, with the input on port 1514, and then created a logging target in OPNsense with transport UDP(4) and everything left at default except the hostname and port. I see ingress, and I can see the logs and messages, so communication seems to be working.

My problem is that the logs are not human readable. It seems like some kind of encoding is happening, and I am not sure how to work it out. This is what a log looks like in Graylog:

2022-05-25 16:52:25.651 172.17.0.1
�>�b�^�J�\���l��PJS0G�0�0��5�@"P���JS0M�0�05�)

k


Any ideas?

Thanks!

*** UPDATE ***

I configured a UniFi Controller to send syslog, and in Graylog those messages ARE human readable. So it appears that it is something with OPNsense.
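
For anyone debugging something similar, it can help to look at what is actually arriving before Graylog touches it by capturing the raw UDP payload on the Docker host; the port is the one my input listens on, so adjust as needed:

# Print the raw payload of the first few packets hitting the syslog input port
tcpdump -ni any -A -c 10 udp port 1514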