Is this expected as the dataset grows? Is this going to become more noticeable, or (36 hours on) have I hit a steady state?
# opnsense-patch 44e9dc25b
@tuto2 Hi!
but it seems to me that the cleanup process is not optimized. Could this be the reason?
IMHO the index may not help here because of the data conversion (the time column is defined as an integer and the time data is converted during cleanup).
And is the conversion even necessary if we are just comparing two epoch values?
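For illustration only, a cleanup that compares epoch integers directly might look like the sketch below. The database path, table name (`query`) and column name (`epoch`) are assumptions, not the actual OPNsense schema:

```python
import time
import duckdb

# Path, table and column names are placeholders, not the real schema.
con = duckdb.connect('/var/unbound/data/unbound.duckdb')

# Keep the last 7 days; cutoff is an epoch integer, so the comparison
# below is integer-vs-integer and needs no per-row time conversion.
cutoff = int(time.time()) - 7 * 24 * 3600
con.execute("DELETE FROM query WHERE epoch < ?", [cutoff])
con.close()
```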
Removing the index shows no increase in performance.
What conversion are we talking about here?
The biggest limiting factor is the fact that the time of a connect() call on the database increases linearly with the size of the actual DB.
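A rough way to observe that effect is to simply time the connect() call against database files of different sizes; a minimal sketch (the path is just an example):

```python
import time
import duckdb

path = '/var/unbound/data/unbound.duckdb'  # example path

start = time.monotonic()
con = duckdb.connect(path)
print(f"connect() on {path} took {time.monotonic() - start:.3f}s")
con.close()
```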
But what Stephan means here is that reporting facilities like NetFlow/Insight and now Unbound DNS statistics require more computation time, which might require more capable hardware, so switching these on should only be done with that constraint in mind.

Cheers,
Franco
So IMHO it is still possible to speed something up in this part?
Regarding the connect() time (thanks for the hint, I read the docs and didn't get excited), the only thing that comes to mind is to make the "timedelta" configurable (so that we can reduce the size of the data for really high-load servers)?
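A minimal sketch of what a configurable retention window could look like; the setting name, default value and table/column names are assumptions, nothing like this exists in the current code:

```python
import time
import duckdb

RETENTION_DAYS = 2  # hypothetical setting, would come from the GUI/config

def cleanup(con: duckdb.DuckDBPyConnection, retention_days: int = RETENTION_DAYS) -> None:
    """Delete rows older than the configured retention window."""
    cutoff = int(time.time()) - retention_days * 24 * 3600
    # 'query'/'epoch' are placeholder names for the stats table/column.
    con.execute("DELETE FROM query WHERE epoch < ?", [cutoff])
```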
PS: I also noticed that the database file does not shrink after deletion, and I did not find such an option in DuckDB. Is export/import the only option that remains?
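If export/import really is the only route, a compaction pass could look roughly like the sketch below; the paths are illustrative, and the service would have to be stopped while the compacted file is swapped in:

```python
import duckdb

# Dump the current database to a directory of flat files.
con = duckdb.connect('/var/unbound/data/unbound.duckdb')
con.execute("EXPORT DATABASE '/tmp/unbound_dump'")
con.close()

# Re-import into a fresh, compact file, then swap it in place of the old one.
new = duckdb.connect('/var/unbound/data/unbound_compact.duckdb')
new.execute("IMPORT DATABASE '/tmp/unbound_dump'")
new.close()
```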
Seems there is something to be gained there indeed.
Maybe some more observation time is needed there.