[SOLVED] 19.1.7 update fails - disk full

Started by chemlud, May 05, 2019, 03:18:28 PM

Thanks for reply!

Will check asap, hopefully before end of next week...

Unbound building was not on this machine (but on a 64-bit full install, which updated to 19.1.7 just fine, except for unbound 1.9.1 still not working with LibreSSL and DNS-over-TLS ;-) ). As you can see, the "disk full" error can appear even while php packages are installing (see pic appended above). IIRC the error with ruby came up when trying to install 19.1.7 ***again*** after the first try failed with "disk full". In principle it's a plain vanilla nano setup from about 2 years ago, updated regularly.
kind regards
chemlud
____
"The price of reliability is the pursuit of the utmost simplicity."
C.A.R. Hoare

felix eichhorns premium katzenfutter mit der extraportion energie

A router is not a switch - A router is not a switch - A router is not a switch - A rou....

That does not exempt it from accumulating temporary files over the years, which may linger depending on how you use the system.


Cheers,
Franco

PS: there may be files hidden in /var or /tmp underneath your RAM MFS mount points ;)

Quote from: chemlud on May 09, 2019, 08:51:39 AM
Are we really the only two users in the whole wide world with this problem? :-/

Lol, maybe we are the only people left running from a Nano image  :P

This is the output of du -a | cut -d/ -f2 | sort | uniq -c | sort -nr

42408 usr
1800 boot
356 etc
345 var
161 root
  89 sbin
  77 lib
  75 dev
  73 tmp
  72 conf
  38 bin
  12 home
   9 libexec
   1 sys
   1 rescue
   1 proc
   1 net
   1 mnt
   1 media
   1 entropy
   1 COPYRIGHT


The largest folders in /usr are:

8397 python3.6
7841 python2.7
3373 perl5


So, my issue is completely unrelated to temporary files. NANO images simply don't have enough inodes. Also, /var and /tmp are RAM disks by default in NANO images.
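For anyone else hitting this: a quick way to tell inode exhaustion apart from ordinary block exhaustion is df's -i flag (present in both FreeBSD and GNU df), which adds the iused/ifree/%iused columns:

```shell
# If %iused on / is at or near 100% while Capacity still shows free
# space, pkg/update failures with "disk full" are inode exhaustion.
df -i /
```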

Could be, that's why I put the following into the notes:

https://github.com/opnsense/changelog/blob/9473107466ada781911c6a9a8c353f9c8ee9ee9c/doc/19.1/19.1.7#L11-L14

I'm not sure how to solve it just yet, but the problem is pretty clear. Maybe this only happens on 4GB cards. The long-term goal is obviously to get rid of Python 2.7, but it'll be hard to kill for a little while longer...


Cheers,
Franco

Yeah, I definitely saw that. I checked my disk space usage first and didn't even think about inodes until the update failed halfway through the packages. Ah well, I took a config backup first. I will reload it from the ISO, and hopefully be able to specify a custom format command during the install.

The issue is in the build system: wherever the format command lives for the (memory?) disk that the NANO disk image file is eventually created from.

The UFS filesystem just needs to be created with more inodes during the build process ;-p

That we could change for 19.7 gladly. I think the code is this...

https://github.com/opnsense/tools/blob/master/build/nano.sh#L61-L62

We do not specify a whole lot yet.
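For what it's worth, here is a sketch of how the density knob maps to an inode count. Per the makefs man page linked earlier, density for the ffs backend is bytes-per-inode, so a lower density yields more inodes; the exact -o invocation below is an assumption sketched from that page, not tested against the build system:

```shell
# Hypothetical makefs invocation (assumption: ffs-options passed via -o):
#   makefs -t ffs -o density=8192 -o bsize=32768 -o fsize=4096 img.ufs tree/
#
# Helper: pick a density value for a desired inode count.
density_for() {  # args: filesystem_bytes target_inodes
  echo $(( $1 / $2 ))
}

# e.g. ~400,000 inodes on a 3 GB image:
density_for $((3 * 1024 * 1024 * 1024)) 400000   # → 8053
```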


Cheers,
Franco

Looks like a winner to me :D

Output from my stock NANO install:

Filesystem                Size    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/ufs/OPNsense_Nano    4.0G    1.3G    2.4G    36%     46k  2.5k   95%   /
devfs                     1.0K    1.0K      0B   100%       0     0  100%   /dev
tmpfs                      76M     16M     60M    21%     198  2.1G    0%   /var
tmpfs                      63M    2.5M     60M     4%      73  2.1G    0%   /tmp
devfs                     1.0K    1.0K      0B   100%       0     0  100%   /var/unbound/dev
devfs                     1.0K    1.0K      0B   100%       0     0  100%   /var/dhcpd/dev


Looks like it has close to 50,000 inodes, and is out. Maybe make it 100,000?

When I reload my install, I will probably do something like 250,000, just for future proofing. You can always expand the partition on a larger disk later in life. You can't increase inodes though, without migrating to a fresh filesystem.

Doesn't it decrease the block size?

https://www.freebsd.org/cgi/man.cgi?query=makefs&apropos=0&sektion=8&manpath=FreeBSD+11.2-RELEASE&arch=default&format=html

Looks like something with bsize and density could do the trick. But I don't want to kill speed or longevity for SD cards too much for the sake of being future proof. We'll have to compromise a little, ok?


Cheers,
Franco

May 10, 2019, 07:31:02 PM #25 Last Edit: May 10, 2019, 07:38:06 PM by ky41083
My lunch break just ended :'( I will read this before I reload my current install, sounds like I will need to review it anyhow.

I completely agree with you, block size for any modern filesystem should never be less than 4K. I have read about people seeing cheap flash drives with 8K blocks. Ick, lol.

Again, I will post here after I get a chance to read what you linked.

Thank you very much for working with me on this!

May 10, 2019, 07:37:23 PM #26 Last Edit: May 10, 2019, 07:40:46 PM by ky41083
Oh I see. No, it doesn't affect block size. It is simply how many bytes of data space you want per inode. So, basically, filesystem bytes / density = inode count.

This talks about it in more detail:
https://forums.freebsd.org/threads/ufs2-inodes.51236/

The above also states that fragsize is a component of it. More homework is needed on my end ???

Scratch that, frag size is only a component of the default when nothing is specified. If you specify density, it is simply: filesystem bytes / density = total inodes.

May 12, 2019, 10:36:35 PM #27 Last Edit: May 13, 2019, 12:10:32 PM by ky41083
Ok, from the ISO installer, default settings, guided install mode, approximate inodes per disk size:

3GB = 400,000 inodes
4GB = 600,000 inodes
8GB = 1,100,000 inodes

So, I think it is probably pretty reasonable to bump the inode count up on the 3GB NANO image: at LEAST roughly 400,000 inodes, as this is what newfs uses by default. newfs also defaults to a 32k block size & 4k fragment size. Those inode numbers aren't exact, but it's fairly easy to math them out using
Quote: The default is to create an inode for every (2 * frag-size) bytes of data space.
from https://www.freebsd.org/cgi/man.cgi?newfs(8)

I decided to use an 8GB disk this time, and keep the default of 1.1m inodes.


I would personally go with 1,500,000 inodes, or possibly even more. My thinking is that the NANO image is auto expanding. So if you write it to a 250GB SSD, on first boot, the size of the filesystem is going to expand to 250GB, but the inode count will remain the same. I know we can't preplan for every use case, but I do feel like 250,000 isn't enough. Think of the 512GB microSD cards, lol. We also aren't talking about a notable loss of usable space to store inode records. I suppose that might be worth finding out first, the actual size of an inode or block of inodes in bytes.
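On the space question raised above: assuming UFS2's 256-byte on-disk inode (UFS1 uses 128 bytes), the raw inode-table overhead is easy to estimate:

```shell
# Rough space cost of the inode table (assumption: 256 bytes per
# UFS2 inode; ignores other per-cylinder-group metadata).
inode_space_mb() {  # arg: inode count
  echo $(( $1 * 256 / 1024 / 1024 ))
}

inode_space_mb 250000    # → 61  (MB)
inode_space_mb 1500000   # → 366 (MB)
```

So 1.5 million inodes would be a noticeable chunk of the base 3 GB image, though negligible once the filesystem auto-expands onto a large disk.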

The only reason I didn't go with more inodes than that, on this particular install, is because it is primarily used as an OpenVPN endpoint. Possibly a few more network services down the road, like DHCP, DNS, NTP, Dynamic DNS, SMTP smart host, etc. But for the most part, this specific install will never be a true edge device, it will live behind the edge device, and as such, won't see a ton of plugins installed.

If I get some time today, I will look around and see if I can find inode size in bytes for UFS2.