Messages - ejprice

#1
22.1 Legacy Series / How to switch HBA driver?
July 08, 2022, 06:34:25 PM
Greetings everyone!

We're running OPNsense on a Dell T340 with a PERC H730 controller. OPNsense is logging hardware errors with the mfi driver.

mfi0: I/O error, cmd=0xfffffe00d9307540, status=0x3c, scsi_status=0
mfi0: sense error 0, sense_key 0, asc 0, ascq 0


On 21.7, fixing it was as simple as adding mrsas_load="YES" to /boot/loader.conf.local, which resolved the issue.
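
For reference, this is the line that was added to /boot/loader.conf.local on 21.7:

# /boot/loader.conf.local
mrsas_load="YES"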

The file still exists on the system after a command-line upgrade to 22.1 - HOWEVER, FreeBSD/OPNsense is back to loading the mfi driver, causing the above error.

mfisyspd1 on mfi0
mfisyspd1: 457862MB (937703088 sectors) SYSPD volume (deviceid: 1)
mfisyspd1:  SYSPD volume attached
mfi0: 9143726 (boot + 28s/0x0002/info) - Inserted: PD 20(c None/p1) Info: enclPd=20, scsiType=d, portMap=00, sasAddr=53cea0f09900f200,0000000000000000
mfi0: 9143727 (boot + 28s/0x0002/info) - Inserted: PD 00(e0x20/s0)
mfi0: 9143728 (boot + 28s/0x0002/info) - Inserted: PD 00(e0x20/s0) Info: enclPd=20, scsiType=0, portMap=00, sasAddr=4433221106000000,0000000000000000
mfi0: 9143729 (boot + 28s/0x0002/info) - Inserted: PD 01(e0x20/s1)
mfi0: 9143730 (boot + 28s/0x0002/info) - Inserted: PD 01(e0x20/s1) Info: enclPd=20, scsiType=0, portMap=01, sasAddr=4433221107000000,0000000000000000
mfi0: 9143731 (boot + 28s/0x0020/info) - Controller operating temperature within normal range, full operation restored
mfi0: 9143732 (710612434s/0x0020/info) - Time established as 07/08/22 16:20:34; (28 seconds since power on)
mfi0: 9143733 (710612471s/0x0020/info) - Host driver is loaded and operational
Trying to mount root from zfs:zroot/ROOT/default []...



Does anyone know how to force the loading of the mrsas driver over the mfi driver?

Thanks in advance!
#2
Just reporting back that rolling back to Suricata 6.0.3 fixed the issue.
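
In case it helps anyone else searching later: a rollback like that can be done with opnsense-revert, which reinstalls the package version shipped with an earlier release. The release number below is only a placeholder - use whichever snapshot still carried Suricata 6.0.3.

# revert the suricata package to the version from an earlier release
# (release number is a placeholder)
opnsense-revert -r 21.7.5 suricata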
#3
I am experiencing the same issue with the recent update: 21.7.6, 8 GiB of memory, ET ruleset. Everything was fine prior.
#4
20.1 Legacy Series / Re: Metallb and Kubernetes
November 29, 2020, 01:20:18 PM
Quote from: kya on November 29, 2020, 12:03:44 PM
I'm using Calico for CNI.

Awesome. That's what I was planning on using. Thanks again for the info!
#5
20.1 Legacy Series / Re: Installation on ZFS
November 29, 2020, 12:37:02 AM
Quote from: pmhausen on November 28, 2020, 11:46:02 PM
There's a plethora of articles on why you should never ever run ZFS without ECC memory, because it will destroy your data on disk. Unfortunately all of these articles are a load of bull...

The "scrub of death" is a myth.

ZFS without ECC is still better than any other filesystem without ECC with respect to preserving your data.

Period.

Read about it here:
https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

Attend the ZFS developer meetings - they are public.

Listen to bsdnow.tv


But please stop spreading unsubstantiated FUD ...

Actually, I'm subscribed to the OpenZFS mailing list and read the posts every day. The most frequent troubleshooting question asked when someone has a bizarre problem is "Do you have ECC memory?"

So, in your opinion, it's FUD. I disagree. I think the additional resiliency ECC provides, particularly to ZFS, is worth the small extra outlay. I suggested readers READ on their own and form their own opinions - which is certainly not FUD, but encouraging people to do their own research.

#6
20.1 Legacy Series / Re: Installation on ZFS
November 28, 2020, 08:15:43 PM
I'm a huge fan of ZFS. It is an awesome, reliable, performant filesystem with undeniable resiliency. It is way better than UFS and pretty much every other FS out there. I use ZFS everywhere I can.

Now that I see a clear path to using ZFS on OPNsense, I'm already thinking of ways to do it - because ZFS is that awesome and, most importantly, resilient.

However, using ZFS without ECC memory is a bad idea. I won't get into the whys, as you can just search for the plethora of articles on it. Yes, you can do it - but no, you really, really don't want to.

Also, to get the most bang for your buck with ZFS, you should have a separate, fast, small SLOG device for the ZIL. To keep things cost-effective, you can combine the SLOG and the L2ARC on the same NVMe and still expect reasonable performance.
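
As a rough sketch of what that looks like (pool and partition names are placeholders - a small partition on the NVMe for the SLOG, the rest for L2ARC):

# small partition as the SLOG for the ZIL
zpool add zroot log nvd0p1
# remainder of the NVMe as L2ARC
zpool add zroot cache nvd0p2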

Hope that helps.
#7
20.1 Legacy Series / Re: Metallb and Kubernetes
November 28, 2020, 07:57:29 PM
Hi @kya

I'm just curious, which CNI are you using with Metallb?

(I was just looking to set this up - thanks for your post!)
#8
Quote from: jds on October 31, 2018, 04:16:36 PM
Yeah, I was the OP on that other thread, and my 'solution' was to uninstall the unifi controller and migrate it to a raspberry pi. The unifipi works great, but the best solution would probably be to have an official way to install the controller in the same box as opnsense. Hell, while I am fantasizing, why not make it a widget, too? Seriously, though, is this fix in 19.1 just because opnsense will move to freebsd 11.2, which might break the unifi installation again in the future? Or is there any plan to have an official way to have them play nicely together?

Unifi on the Raspberry Pi works incredibly well. I have two of them deployed and they work flawlessly. I would also say that just because you could run the Unifi controller on OPNsense, it might not be a great idea from a security standpoint. The software is a large Java application that is essentially a "black box"; you don't know what kind of potential vulnerabilities it has. It is closed source and proprietary. The code most likely doesn't get the scrutiny that the standard, packaged OPNsense daemons do. I'm not saying in any way that it's bad software - I use it and I like it. I'm just saying we don't know what is in it, and I wouldn't recommend running it on a firewall. Especially when it runs like a champ on a $35 device :)
#9
18.7 Legacy Series / Re: Empty /var/log/flowd.log
November 03, 2018, 01:30:13 AM
No, I have a strong aversion to rebooting except for kernel updates and the like.

I originally noticed there was nothing showing up in the Reporting->Insight view. The aggregator was running but flowd was stopped. I started flowd from the web UI and then checked the log file with flowd-reader but it returned no output. I restarted the daemon from the command line, still nothing. The file had bytes allocated and there was binary data in it, but I assumed it was corrupt since flowd-reader returned nothing.

That's when I removed the file and restarted flowd again. It created the log file, but it was zero bytes in size. No data was being written. I searched the forums here, found nothing, and made a post. I went back for a second look and everything was working.

Weird. 
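
For the record, the sequence from the shell was roughly this (the restart method is approximate - flowd is normally managed from the web UI):

flowd-reader /var/log/flowd.log   # returned nothing, even though the file had data
rm /var/log/flowd.log
pkill flowd && flowd              # recreated the file, but it stayed at zero bytes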
#10
18.7 Legacy Series / Re: Empty /var/log/flowd.log
November 01, 2018, 02:09:25 AM
It seems to be fixed. No idea how or why.
#11
18.7 Legacy Series / Empty /var/log/flowd.log
October 31, 2018, 02:50:17 PM
Greetings!

I have just completed a fresh install on new hardware, updated to 18.7.6, then restored my config using the web UI. For some reason, flowd does not appear to be writing data to /var/log/flowd.log.

I have started flowd from the shell in the foreground:

root@hades:/var/netflow # flowd -d
read_config: entering
child_get_config: entering
drop_privs: dropping privs without chroot
send_config: entering fd = 4
send_config: done
child_get_config: child config done
recv_config: entering fd = 3
recv_config: ready to receive config
Listener for [127.0.0.1]:2056 fd = 3
Adjusted socket receive buffer from 42080 to 524288
Setting socket send buf to 1024
privsep_init: entering
drop_privs: dropping privs with chroot
init_pfd: entering (num_fds = 0)
init_pfd: done (num_fds = 2)
client_open_log: entering
answer_open_log: entering
^Cprivsep_master: child exited
flowd_mainloop: monitor closed
Exiting on signal 2


I've removed the file and allowed flowd to recreate it, but still nothing.
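
One thing I haven't tried yet is confirming that exports are actually reaching the listener shown in the debug output above; something like this should show whether packets are arriving (interface name is a guess):

# watch for netflow exports hitting the flowd listener on 127.0.0.1:2056
tcpdump -ni lo0 udp port 2056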

Any pointers would be appreciated.

Thanks in advance!
#12
Hmm. My command line shows I'm using netmap. I think that is the out-of-the-box setting.

/usr/local/bin/suricata -D --netmap --pidfile /var/run/suricata.pid {...}
#13
I've tried changing some of the Suricata settings but so far no luck.
#14
Anyone else try testing this? It seems to be a very limiting factor on an SMP box.
#15
SafeStack appears to be working for me as well. I don't have IPsec configured, so I can't test that.