This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.
3
24.7 Production Series / OpenVPN Instances - Buffer Size / TLS Version Minimum / NetBIOS
« on: August 27, 2024, 04:44:19 am »
Hello! Posting here first before submitting a feature request. I'm looking at migrating my OpenVPN servers from the legacy configuration over to Instances. I'm noticing that a few advanced options I use are missing, and I'm curious whether anyone else feels they should be included.
Buffer size: I always set sndbuf and rcvbuf and also push them to the client. This is extremely important for mitigating bandwidth bottlenecks, especially on faster and/or higher-latency connections. Would it make sense to request an option for each, with a text box where the value can be entered in bytes and an accompanying checkbox to push the custom value to clients? It would essentially achieve an effect similar to the following (a rough sizing example follows the snippet):
sndbuf 2097152
push "sndbuf 2097152"
rcvbuf 2097152
push "rcvbuf 2097152"
TLS Version Minimum: This is an option I use to meet compliance policy requirements and prevent TLS downgrade attacks. Would it make sense for this to be a drop-down with 1.2, 1.3, and Highest as options? It would achieve something similar to the following:
# Use 1.2
tls-version-min 1.2
# Use 1.3
tls-version-min 1.3
# Use Highest Supported
tls-version-min 0.0 or-highest
Disable NetBIOS: Lastly, the push options list would be a good place for an option that disables NetBIOS name lookups, to cut down on VPN traffic. Maybe call it "push disable-nbt". It would achieve the following:
push "dhcp-option DISABLE-NBT"
I'd appreciate your feedback on the above. Thank you!
4
24.1 Legacy Series / Re: Copy Multiple Firewall Rules
« on: April 19, 2024, 06:54:18 pm »
If this is not a feature, can it potentially be added?
5
24.1 Legacy Series / Copy Multiple Firewall Rules
« on: April 07, 2024, 11:06:48 pm »
Is it possible to copy multiple firewall rules to a different interface? I know you can copy a single rule one at a time.
I've searched all over, and tried many things in the GUI.
6
High availability / CARP DHCP Cluster - Failover Split Ignored
« on: December 30, 2023, 10:09:27 pm »
I have a two-node DHCP CARP cluster. I only want the primary (whichever node happens to be primary at the time) handing out leases, so I've set the Failover split to 256. XMLRPC Sync has replicated the Failover split value of 256 to the secondary node. The help text states the value will be ignored on the secondary, so this should be fine, but the help text also says to leave the value blank on the secondary, which XMLRPC Sync does not do.
However, it seems both the primary and backup are still handing out DHCP leases. I tried fixing this by setting the response delay on the backup, but that value gets wiped out as soon as the config gets synced from the primary so that won't work.
Any idea how I can get only the primary to hand out DHCP leases via the Failover split value?
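For reference, this is roughly the ISC dhcpd failover stanza I'd expect to be generated on the primary with a split of 256; the addresses, ports, and timers below are placeholders I made up, not values from my actual config:
failover peer "dhcpd" {
  primary;
  address 192.168.1.2;        # placeholder: sync address of the primary
  peer address 192.168.1.3;   # placeholder: sync address of the secondary
  port 519;
  peer port 520;
  max-response-delay 10;
  max-unacked-updates 10;
  mclt 600;
  split 256;                  # 256 = primary answers all clients
}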
7
21.7 Legacy Series / Re: VPN Security policies
« on: September 03, 2021, 05:00:55 pm »
Could you post screenshots of your interface rules?
8
19.1 Legacy Series / Re: [SOLVED] 19.1.7 update fails - disk full
« on: May 23, 2019, 11:47:33 am »
Not a problem. Thank you for all the fixes!
9
19.1 Legacy Series / Re: 19.1.7 update fails - disk full
« on: May 19, 2019, 10:48:12 pm »
Looks good to me. Booted, checked inodes, configured a few simple services, nothing broke.
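For anyone who wants to repeat the inode check, something like the following (standard FreeBSD df, nothing specific to the image) shows used and free inodes per filesystem:
df -ih /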
10
19.1 Legacy Series / Re: 19.1.7 update fails - disk full
« on: May 15, 2019, 06:53:23 pm »
Right, agreed. I like 500,000 better.
That definitely makes things better than they were. At around 50k, simply installing a NANO image and updating to current will brick the install.
Thank you, sir!
PS: If you want me to try a test image, I can do that, but not until the weekend.
11
19.1 Legacy Series / Re: 19.1.7 update fails - disk full
« on: May 13, 2019, 08:18:44 pm »
The size of a single UFS2 inode appears to be 256 bytes.
Reference: http://www.ico.aha.ru/h/The_Design_and_Implementation_of_the_FreeBSD_Operating_System/ch08lev1sec2.htm
Disk space consumed by the inode table at various inode counts:
5,000,000 inodes = 1221 MB
1,500,000 inodes = 366 MB
1,000,000 inodes = 244 MB
500,000 inodes = 122 MB
250,000 inodes = 61 MB
I feel like we can get away with a little more than 250,000.
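As a quick sanity check of those numbers (my own arithmetic, using the 256-byte inode size above):
250,000 inodes * 256 bytes = 64,000,000 bytes ≈ 61 MB
5,000,000 inodes * 256 bytes = 1,280,000,000 bytes ≈ 1221 MB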
12
19.1 Legacy Series / Re: 19.1.7 update fails - disk full
« on: May 13, 2019, 12:06:54 pm »
I would personally go with 1,500,000 inodes, or possibly even more. My thinking is that the NANO image is auto-expanding, so if you write it to a 250GB SSD, on first boot the filesystem expands to 250GB but the inode count remains the same. I know we can't plan for every use case, but I do feel like 250,000 isn't enough. Think of the 512GB microSD cards, lol. We also aren't talking about a notable loss of usable space to store inode records. I suppose that might be worth finding out first: the actual size of an inode, or a block of inodes, in bytes.
The only reason I didn't go with more inodes than that on this particular install is that it is primarily used as an OpenVPN endpoint. It may gain a few more network services down the road, like DHCP, DNS, NTP, Dynamic DNS, an SMTP smart host, etc. But for the most part, this specific install will never be a true edge device; it will live behind the edge device and, as such, won't see a ton of plugins installed.
If I get some time today, I will look around and see if I can find inode size in bytes for UFS2.
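For reference, the knob that controls this at filesystem-creation time is newfs's -i flag (bytes of data space per inode). The invocation below is only a hypothetical sketch of how an image could be built with denser inodes; the device name and values are placeholders, not the actual build command:
# one inode per 4096 bytes instead of the default 8192, with the same 32k/4k geometry
newfs -U -b 32768 -f 4096 -i 4096 /dev/da0s1a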
13
19.1 Legacy Series / Re: 19.1.7 update fails - disk full
« on: May 12, 2019, 10:36:35 pm »
Ok, from the ISO installer, default settings, guided install mode, approximate inodes per disk size:
3GB = 400,000 inodes
4GB = 600,000 inodes
8GB = 1,100,000 inodes
So I think it is reasonable to bump the inode count on the 3GB NANO image up to at least roughly 400,000 inodes, since that is what newfs uses by default. newfs also defaults to a 32k block size and a 4k fragment size. Those inode numbers aren't exact, but they are easy enough to work out using the newfs default:
Quote
The default is to create an inode for every (2 * frag-size) bytes of data space.
from https://www.freebsd.org/cgi/man.cgi?newfs(
I decided to use an 8GB disk this time and keep the default of 1.1m inodes.
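As a quick check of that rule of thumb (my own arithmetic, not installer output): with the default 4k fragment size, that works out to one inode per 8192 bytes, which lines up with what I saw:
3GB: 3 * 1024^3 / 8192 = 393,216 inodes (roughly the 400,000 observed)
8GB: 8 * 1024^3 / 8192 = 1,048,576 inodes (roughly the 1,100,000 observed)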
14
19.1 Legacy Series / Re: 19.1.7 update fails - disk full
« on: May 10, 2019, 07:37:23 pm »
Oh I see. No, it doesn't affect block size. It is simply a calculation of how many bytes of storage you want per inode; so, basically, bytes / $value.
This talks about it in more detail:
https://forums.freebsd.org/threads/ufs2-inodes.51236/
The above also states that frag size is a component of it. More homework is needed on my end.
Scratch that, frag size is only a component of the default if nothing is specified. If you specify it, it is simply:
Code:
bytes / $value = $inodesTotal
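To make that concrete (my own numbers, purely illustrative), a 3GB filesystem created with -i 4096:
3,221,225,472 bytes / 4096 bytes per inode = 786,432 inodes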
15
19.1 Legacy Series / Re: 19.1.7 update fails - disk full
« on: May 10, 2019, 07:31:02 pm »
My lunch break just ended. I will read this before I reload my current install; it sounds like I will need to review it anyhow.
I completely agree with you: the block size for any modern filesystem should never be less than 4K. I have read about people seeing cheap flash drives with 8K blocks. Ick, lol.
Again, I will post here after I get a chance to read what you linked.
Thank you very much for working with me on this!