Messages - tmanok

#1
Hi everyone,

I've run into a stumbling block. One of my firewall rules was generated automatically by a NAT (port forward) rule as an associated filter rule, and that firewall rule cannot be edited.

So what's the stumbling block? There is a specific setting I would like to change that I'm not seeing in the NAT port forwarding rule. I'd like to set the interface state tracking method to synproxy; however, this option does not appear to be available in the NAT rule configuration, and it would normally be found in the firewall rules (Firewall > Rules > Edit Rule > Advanced options > State type > synproxy).
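
For context, at the pf level I believe the change amounts to swapping the rule's state option; a rough sketch of the two forms, with a placeholder interface and web server address (not my actual config):

# roughly what a generated pass rule does today (default state tracking):
pass in on igb0 proto tcp from any to 192.168.1.10 port 443 keep state
# the synproxy variant I'm after:
pass in on igb0 proto tcp from any to 192.168.1.10 port 443 flags S/SA synproxy state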

Is there another location for this setting, or must I configure a separate firewall rule to gain the option? Is this different in 22.1?

Thanks everyone,


Tmanok
#2
Hi Franco,

Thanks for your reply. FreeBSD is very mature with ZFS; it has had it since well before Linux did, if I understand correctly. For example, FreeNAS (TrueNAS) and NAS4Free (XigmaNAS) have had it for a long time.

I think the point here is that OPNSense has not had it for very long, and that there are some features still to be added to the GUI, such as pool management, scrub tasks, dataset manipulation, pool status, and more granular memory monitoring (to name a few).

Thanks,


Tmanok
#3
Hi Everyone,

Synproxy is new to me and I want to better understand its configuration. Recently, I've read about how FreeBSD is (or perhaps was) vulnerable to certain types of low-bandwidth DoS attacks, and the best recommendations I could find pointed to synproxy as a solution. From the documentation, synproxy is a state tracking method that can be used on OPNSense, though I would like a more detailed explanation and to be sure that it is appropriate to implement.

On a WAN interface, I have HTTPS (port 443) open to the internet as there is incoming traffic to a specific web server.

Would the appropriate configuration for synproxy be to edit a firewall pass rule for that port, open the advanced section, and simply change the state tracking to synproxy? This sounds too simple, as if there must be caveats. Which services (ports) cannot have synproxy tracking enabled, and what are the trade-offs of synproxy?
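
Assuming the answer is yes, I take it the change could be verified from a shell once applied, something like this (the grep just filters the loaded ruleset):

# list the loaded filter rules and look for the synproxy keyword
pfctl -sr | grep -i synproxy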

Thanks everyone,


Tmanok
#4
Just wanted to add, for anyone else administering an OPNSense server who needs to see temperature sensors:

As seen prior in this thread: sysctl -a | grep temperature
IPMI Tool: ipmi-sensors

To install ipmi-sensors (from the freeipmi port):

# fetch the tools, ports, and src trees
opnsense-code tools ports src
# pin the source tree to the branch matching the installed release
cd /usr/src
git checkout stable/21.7
# install build dependencies for the port
pkg install autoconf automake libtool
# build and install freeipmi from ports
cd /usr/ports/sysutils/freeipmi
make install clean
# read the sensors, including temperatures
ipmi-sensors


Cheers everyone. I'd love to see more information about how to install racadm, hponcfg/hp-health, and IMM for future deployments. If you know anything about setting up those tools on bare-metal OPNSense installations, I'd love to see a post made in these forums.


Tmanok
#5
Hi CookieMonster,

As I said, caches can be dropped, so reporting them as memory usage is "dangerous". For example, htop highlights caches in yellow, and TrueNAS likewise gives them a separate colour from real memory usage. Clearly, mature systems that have integrated ZFS have decided to differentiate the reporting of ARC cache.
Cheers,


Tmanok
#6
Hi Paul,

I believe you have misread what I wrote:
"maybe the web interface should be updated to ignore ARC caches, now that ZFS is being officially supported by OPNSense in the installer." As you can see, I'm not asking for the system to use memory differently, I'm asking for reporting to match newly supported features. In this case, to ignore ZFS ARC caches, like any OS would not call disk buffers or caches to be application memory. Perhaps because of the importance of ARC caches, the community may disagree (fair, they are much less likely to be dropped for other processes).

Additionally, you seem to have assumed my position on something unrelated to my question. I don't mind that the memory is cached or buffered, because cached memory will "move out of the way" for application memory. My concern is how the Dashboard reports memory usage. There is a big difference between 90% memory usage that is actively held by, say, ClamAV or routing algorithms, and 90% that is a file buffer temporarily residing in memory. The former could lead to a kernel panic, while the latter gives me a performance increase. My question to the community is whether we should treat ARC cache like ordinary file system buffers. If it is going to be counted, however, perhaps there should be a more specific indication of what portion is used by applications vs ARC.

Cheers,


Tmanok
#7
Hey Everyone,

Today I was startled by one of my routers indicating 90% memory usage, so I ran vmstat -m (too noisy), then top and htop for good measure. While htop considered my usage (denoted in green) to be just 1.16GB (16%), top was rather more informative:

  • 6441M Wired
  • 472M Free
  • ARC: 5328M

OK, so what I understood is that ZFS is the primary cache hog, while other file system or process caches make up the rest. In this particular case, I agree that the OPNSense web interface reporting was roughly accurate. However, even ARC cache can be dropped, so maybe the web interface should be updated to ignore ARC caches, now that ZFS is being officially supported by OPNSense in the installer.
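
For anyone wanting to check this without the dashboard, the breakdown is visible through stock FreeBSD sysctls (a sketch; values will differ per system):

# current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.size
# configured ARC ceiling
sysctl vfs.zfs.arc_max
# wired memory includes ARC, so wired minus ARC approximates "real" kernel usage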

Cheers, please feel free to let me know your thoughts.


Tmanok
#8
Quote from: franco on October 07, 2021, 08:12:51 AM
Hi there,

It's fair to say it's "coming soon" considering that since 21.7 we now have a default ZFS install option available. However, ZFS has been a long road and there was little outside contribution to make it happen sooner. So now the next goal would be to maybe add a ZFS widget to the dashboard, but there are no concrete plans or feature requests. I'm happy to push this along, but first we need to agree on a feature set for the widget and what it should not do, to keep it simple and maintainable in the future.

As for configuring ZFS from the GUI for use cases such as snapshots/boot environments, that might be part of a future business edition instead.


Cheers,
Franco

Thanks Franco! Sorry to hear that the dev team has not had much contribution to this vital feature. As for supporting ZFS in the GUI via a widget or implementing snapshots, I'd like to think snapshots would be a vital feature so that one could roll back configuration mistakes, especially in critical environments. Without such a snapshotting feature, to me it would simply make sense to run OPNSense as a VM (sad), which disregards many of the other comprehensive features in the OS meant for hardware use. Edit: It is worth mentioning that OPNSense can be recovered by uploading a configuration file. I am aware of this, and to be fair it probably would recover your mistakes without requiring a reinstallation (but may not in all cases, e.g. after updating).

To summarize a few more points in my mind:

  • Proxmox and TrueNAS both utilize ZFS snapshots for critical VMs/data and have good overall ZFS GUIs
  • Configurable notifications in the GUI like TrueNAS Core, or via email like Proxmox, would be desirable
  • Health status / overview with storage usage would be necessary. (Storage consumption already exists in Lobby > Dashboard, for example.)
  • A list of disks and the pool they belong to would be very nice. (Including capacity, model, serial, /dev label.)
  • Drive replacement would be a very, very nice feature. E.g. the GUI widget says a drive is dead (specifying the serial number), with an easily accessible button for removing the dead drive from the pool, then adding a new drive for resilvering/recovery.

Perhaps functions for the administrator such as drive replacement, snapshots, scrubs, and a task list should live elsewhere, such as "System > Diagnostics" for the task log and "System > Storage > Pool Summary, Disaster Recovery, Snapshots, Scrubbing" for the rest.
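
Under the hood these functions would presumably wrap the standard zpool commands; a sketch, assuming the installer-default pool name zroot and placeholder device names:

# pool health, including which member disk is degraded
zpool status zroot
# start a scrub
zpool scrub zroot
# swap a failed disk (da1) for a new one (da2), triggering a resilver
zpool replace zroot da1 da2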

Quote from: pmhausen on October 08, 2021, 09:06:28 AM
Quote from: opnfwb on October 08, 2021, 04:19:42 AM
I'd take ZFS without ECC any day over UFS without ECC, all else being equal. At least with ZFS, you get some power failure tolerance that UFS doesn't provide.

Seconded. The "scrub of death" caused by unreliable memory is a myth and has been debunked multiple times. You should have ECC in every server system. If you don't, ZFS is still the most reliable filesystem around.

https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

Too true. I've run ZFS happily on non-server-grade systems without any corruption; I can't speak to UFS, but I am keen to avoid it. Looking forward to the new features mentioned; it sounds like I'll have to wait for 22.1 and later.

Thanks for all the responses!


Tmanok
#9
Hey everyone,

I have a production system running 21.7.1. I'm hoping to install the SMART plugin (smartctl) to track the health of the system's ZFS boot mirror; however, I'm greeted with "Installation out of date. The update to opnsense-21.7.5 is required". That would require a reboot of our production system, which cannot happen at this time.

Do I really need to upgrade the whole system to install the SMART plugin? That seems a bit ridiculous, and it encourages strange behaviour such as installing every single plugin "just in case" it is needed later, for when the OS can't be upgraded...
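
For anyone in the same spot: the plugin wraps smartmontools, so assuming the smartctl binary is already present on the box, the same data is available from a shell (device and pool names here are placeholders):

# full SMART report for one mirror member
smartctl -a /dev/ada0
# the boot mirror's own health
zpool status zroot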

Cheers!


Tmanok
#10
Turns out, this is a hardware issue. It began affecting the machine on every startup. The issue appears to occur when six interfaces are connected at the same time during startup, and sometimes, after it locks up with six interfaces, it will lock up randomly again even with five. However, unplugging the power, disconnecting all interfaces, booting completely, and then connecting the interfaces works just fine. We're replacing the machine regardless.

Also, this machine resembles a Dell R210, possibly a rebranded R210.
Cheers,


Tmanok
#11
Hey Everyone,

Not here to make a big fuss, but I noticed something that had to be resolved with a reboot. We were running a router with 5 of its 6 interfaces in use, and today I needed to create another dedicated LAN, so I decided to use the last remaining interface.

We have two built-in (on-board) interfaces, BCE0 and BCE1, on a Dell (no model) short-depth machine, plus four Intel NIC interfaces. I had planned to eventually use BCE1 as our secondary WAN, but priorities have changed, so here I am giving it an IPv4 address and enabling the interface after plugging it in. Once I hit "Apply Changes" in the top right, the WebUI loads forever.

OK, so I used another browser, got the SSL warning, but couldn't reach the router after that. OK, try another computer, maybe even on another LAN against another interface IP: login screen, enter root, click "Login", and it loads forever. Damn.

OK, fine, I'll walk over to the console and add this stupid interface. Sure enough, I apply the same config over the console and lose my console session after "bce1 gigabit link up!" appears a couple of times. Hit return, wait a minute, hit it again... OK, f&ck this noise, so I go to CTRL+ALT+F2 and send a reboot.

Worth noting that when I walked over to Console 1, my WebUI interface changes had not yet applied (bce1 had no IP assigned yet according to the console). But after I lost Console 1, Console 2 saw the IP change, and it survived the reboot. I haven't touched it other than to check that the WebUI works again after the reboot.

BCE1 is configured with settings identical (aside from the IPv4 address) to three other interfaces, by the way. Nothing wacky going on.

Cheers, hope someone more experienced than I am can shed some light on this mystery; in the meantime I have work to do.


Tmanok
#12
Hey everyone,

Hate to sound like a numpty, and in fact maybe it's just late, but I'm not sure how to operate this specific health graph (see the attached screenshot), found in Reporting > Health > Packets (WAN).

The Traffic and System submenus of Health both make sense to me, as do the other Reporting pages, but this specific page is either stuck at the wrong time or I've done something to it. It isn't zoomed in, but I'm seeing timestamps at the bottom for around 5am when it's 9:13pm (the system time on the Dashboard and in the other submenus and pages is perfectly in line with the current time).

Please let me know what I can do to revert this packets-per-second graph to the right time.
Cheers!

Tmanok

Edit: I also just noticed it's stuck in September, bloody thing! hahah...
#13
Quote from: pmhausen on October 04, 2021, 09:58:32 PM
But ...

you might want to look into boot environments. They take care of the snapshots and allow you to give fancy names to your various versions, and even boot into past ones if you have console access.

# list current BEs
bectl list
# assume we are running 21.7 and the major update to 22.1 is waiting
# rename "default" to "21.7"
bectl rename default 21.7
# create new BE for the new version
bectl create 22.1
# activate new BE for next reboot, then reboot into it
bectl activate 22.1
reboot
# now perform UI update
# after reboot 2 BEs will be present: 21.7 and 22.1 - you can pick them at the boot loader prompt if necessary
bectl list


You can do the same with minor versions, of course. All the work has already been done. Enjoy ;)

Holy crapperjacks, that's awesome! Was bectl made by/for OPNSense, or is this a FreeBSD tool that I have yet to come across? Well done, whoever made it; it's like a cross between installing a new Linux kernel and making a VM snapshot, hah!
#14
Hey Everyone,

I've installed OPNSense on a short-depth 1U server. I'm very impressed with everything so far, but there is one thing I'm missing. Although there is a SMART status plugin available, it does not give me any ZFS health information, and I can't seem to find anything on the forum or in the web panel.

There are some really smart and in-the-know people on here, @Oxygen and @danb to name a couple; hopefully someone can let me know what I may have overlooked, or whether it is in the pipeline.

For some background, I've installed a ZFS mirror on two 500GB drives. I'm just hoping there is a config menu for checking the status/health and allowing me to easily remove and replace one of the drives. Thanks to smartmontools in the SMART plugin I can see the model and serial of each drive, so I know which one to replace, but I would have to jump on the CLI to issue zpool status-type commands manually, or enable SSH (not on my router, please!).
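
To be concrete about what I'd be typing manually, it would be something like this (pool name assumed to be the installer default, zroot):

# prints "all pools are healthy" when the mirror is fine
zpool status -x
# full detail per member disk: state, read/write/checksum errors
zpool status zroot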

Thanks everyone; I plan on being a frequent member here, especially while I'm still learning your awesome router OS.