Messages - Roger@Opnsense

#1
Cool, thanks for the explanation/reassurance!
#2
Thanks Franco! I am no longer stuck on 18.7. Running that looked like it was bumping me to 19.1.3, and that is what my dashboard and change log report after completion. It seemed that there were still some upgrades available, though, and those were for 19.1.2. That did not seem to make sense, but I have applied everything and it is now running happily, with the dashboard reporting 19.1.3, the audit reporting 19.1.2, and no updates available. Hopefully I didn't break anything and the versions will smooth out at the release following 19.1.3.

#3
I have gone through this upgrade process quite a few times now and the kernel parts seem fine, but OPNsense never makes it to the new version. I tried doing an update on the console instead, and I see that the issue may well be related to a package I have installed:

https://www.sunnyvalley.io/sensei
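
For reference, the console attempt was essentially the stock updater plus a quick look at installed packages; the commands below are only a sketch, and the sensei filter is just an example name:

# Run the stock updater from a console/SSH shell instead of the GUI
opnsense-update

# List installed packages and look for the suspect add-on
pkg info | grep -i sensei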
#4
19.1 Legacy Series / Upgrade from 19.1 to 18.7.10
March 10, 2019, 02:03:45 PM
I clicked on the unlock to perform the upgrade from 18.7.10 to 19.1 and got the following message.

***GOT REQUEST TO UPGRADE: maj***
Fetching packages-19.1-OpenSSL-amd64.tar: .................................. done
Fetching base-19.1-amd64.txz: ............ done
Fetching kernel-19.1-amd64.txz: .............................................. failed, no signature found
***DONE***

At this point it seemed like there was no way to upgrade, so, searching the forum, I found:

Rather than remove the files suggested in the final post there, I mv'ed them out of the way, and things looked promising. I had an upgrade button again, so I proceeded with the upgrade. All looked good, a few reboots, etc., and the system was back up and functional.
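
What I did was along these lines; the cache path below is an assumption on my part (the thread named the exact files, which matched the sets fetched above):

# Move the previously fetched upgrade sets aside rather than deleting them
# (cache location assumed; adjust to wherever the sets live on your box)
mkdir -p /root/upgrade-set-backup
mv /var/cache/opnsense-update/* /root/upgrade-set-backup/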

Checking the versions now, though, things do not look right. The lobby dashboard now shows

Versions   OPNsense 18.7.10_4-amd64
FreeBSD    11.2-RELEASE-p8-HBSD
OpenSSL    1.0.2q 20 Nov 2018


And my upgrade screen shows what it did previously (that I am on 19.7.10), but only very briefly. After a second or two, it changes its mind and instead offers me the following upgrade options.

There are 2 updates available, total download size is 98.7MiB. This update requires a reboot.

Package Name   Current Version   New Version   Required Action
base           19.1              18.7.10       upgrade
kernel         19.1              18.7.10       upgrade


Given that things seem to be working fine but the state looks confused, I am not sure what the safest path forward would be.
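
For anyone wanting to compare notes, these are the sorts of commands that should show what is actually installed from a shell (a sketch using standard OPNsense/FreeBSD tools):

# What the firmware itself reports
opnsense-version

# What the package database thinks the core version is
pkg info opnsense

# Kernel and userland as seen by the OS
uname -a
freebsd-version -ku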

#5
Great, thanks for clarifying!
#6
19.1 Legacy Series / The next major release is: 19.1
March 09, 2019, 02:31:39 PM
I am currently running 18.7.10 and now see vulnerabilities being reported. To upgrade, I have to unlock the next release. I want to remain on stable releases and avoid development releases or release candidates. Initially, when 19.1 was the only choice, I thought it was a release candidate and that the unlock would be removed when it became stable; however, it looks like I misunderstood, as there are now multiple releases queued up (19.1, 19.1.1, 19.1.2, 19.1.3) and the forums label it as production.

Looking at https://opnsense.org/about/road-map/, I see:

  • NEXT RELEASE 19.1 - January 2019
  • LATEST RELEASE 18.7 - July 31, 2018

So my reading is that 18.7 is the latest production release.

From the "book link" next to 19.1.3 in the UI, I read "recommend running 18.7 until this is taken care of", so again I take 18.7 to be the current production release.

The forum however labels 18.7 as legacy and 19.1 as production.

So, will the unlock go away when 19.1+ is as stable as it will get or will it always require a conscious click?
Since 18.7 is EOL, I guess vulnerabilities will never get fixed, so is the 19.1 path going to lead to better security with a little upgrade risk?

#7
18.7 Legacy Series / Fix insight < 7 Days and keep data
November 23, 2018, 04:01:17 PM
I'd be grateful for any tips for fixing or troubleshooting the most granular Insight graphs.

Insight worked fine for a while, but it broke for me a while back after an upgrade through the UI. I am now running 18.7.8 and had hoped my problem would go away with subsequent updates, but it has persisted. I have read numerous articles, but everything I have seen that might help looked like it would lose my current history. I am hoping to fix the granular graphs while preserving my history.

My Insight works only partially, as follows:

Interface totals (bits/sec) - Last 2 hours - No Data Available
Interface totals (bits/sec) - Last 8 hours - No Data Available
Interface totals (bits/sec) - Last 24 hours - No Data Available
Interface totals (bits/sec) - All the rest seem to work fine.

Top usage ports / sources (bytes) - Last 2 hours - No Data Available
Top usage ports / sources (bytes) - Last 8 hours - No Data Available
Top usage ports / sources (bytes) - All the rest seem fine.

I have rebooted the system.
I have reinstalled the package flowd.
I ran Reporting --> Settings --> Repair Netflow Data
I checked /var/log/flowd.log and see current data using flowd-reader flowd.log
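
The flowd-reader check from the list above was roughly the following (output omitted; flowd-reader ships with the flowd package):

# Dump the most recent records from the active flow log to confirm
# that flowd is still collecting
flowd-reader /var/log/flowd.log | tail -n 20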

The flow logs seem to be updating fine and rotating.

root@OPNsense:/var/log # ls -lh flowd.log*
-rw-------  1 root  wheel   1.4M Nov 23 09:34 flowd.log
-rw-------  1 root  wheel    11M Nov 23 08:17 flowd.log.000001
-rw-------  1 root  wheel    11M Nov 22 23:09 flowd.log.000002
-rw-------  1 root  wheel    11M Nov 22 18:45 flowd.log.000003
-rw-------  1 root  wheel    11M Nov 22 08:45 flowd.log.000004
-rw-------  1 root  wheel    11M Nov 21 22:08 flowd.log.000005
-rw-------  1 root  wheel    11M Nov 21 13:38 flowd.log.000006
-rw-------  1 root  wheel    11M Nov 20 22:34 flowd.log.000007
-rw-------  1 root  wheel    11M Nov 20 11:28 flowd.log.000008
-rw-------  1 root  wheel    11M Nov 19 22:27 flowd.log.000009
-rw-------  1 root  wheel    11M Nov 19 07:06 flowd.log.000010


I checked the unique flowd-related messages in system.log (there was one issue starting flowd_aggregate a while ago)

root@OPNsense:/var/log # grep flowd system.log | cut -d' ' -f 5- | sort -u
OPNsense flowd_aggregate.py: sqlite3 repair /var/netflow/metadata.sqlite
OPNsense flowd_aggregate.py: sqlite3 repair /var/netflow/metadata.sqlite [done]
OPNsense flowd_aggregate.py: sqlite3 repair /var/netflow/src_addr_000300.sqlite
OPNsense flowd_aggregate.py: sqlite3 repair /var/netflow/src_addr_000300.sqlite [done]
OPNsense flowd_aggregate.py: sqlite3 repair /var/netflow/src_addr_003600.sqlite
OPNsense flowd_aggregate.py: sqlite3 repair /var/netflow/src_addr_003600.sqlite [done]
OPNsense flowd_aggregate.py: sqlite3 repair /var/netflow/src_addr_086400.sqlite
OPNsense flowd_aggregate.py: sqlite3 repair /var/netflow/src_addr_details_086400.sqlite
OPNsense flowd_aggregate.py: sqlite3 repair /var/netflow/src_addr_details_086400.sqlite [done]
OPNsense flowd_aggregate.py: start watching flowd
OPNsense flowd_aggregate.py: startup, check database.
OPNsense flowd_aggregate.py: vacuum /var/netflow/dst_port_000300.sqlite
OPNsense flowd_aggregate.py: vacuum /var/netflow/dst_port_003600.sqlite
OPNsense flowd_aggregate.py: vacuum /var/netflow/dst_port_086400.sqlite
OPNsense flowd_aggregate.py: vacuum /var/netflow/interface_000030.sqlite
OPNsense flowd_aggregate.py: vacuum /var/netflow/interface_000300.sqlite
OPNsense flowd_aggregate.py: vacuum /var/netflow/interface_003600.sqlite
OPNsense flowd_aggregate.py: vacuum /var/netflow/interface_086400.sqlite
OPNsense flowd_aggregate.py: vacuum /var/netflow/src_addr_000300.sqlite
OPNsense flowd_aggregate.py: vacuum /var/netflow/src_addr_003600.sqlite
OPNsense flowd_aggregate.py: vacuum /var/netflow/src_addr_086400.sqlite
OPNsense flowd_aggregate.py: vacuum /var/netflow/src_addr_details_086400.sqlite
OPNsense flowd_aggregate.py: vacuum done
OPNsense pkg-static: flowd reinstalled: 0.9.1_3 -> 0.9.1_3
OPNsense root: /usr/local/etc/rc.d/flowd_aggregate: WARNING: failed to start flowd_aggregate
flowd_aggregate.py: start watching flowd
flowd_aggregate.py: startup, check database.
flowd_aggregate.py: vacuum /var/netflow/dst_port_000300.sqlite
flowd_aggregate.py: vacuum /var/netflow/dst_port_003600.sqlite
flowd_aggregate.py: vacuum /var/netflow/dst_port_086400.sqlite
flowd_aggregate.py: vacuum /var/netflow/interface_000030.sqlite
flowd_aggregate.py: vacuum /var/netflow/interface_000300.sqlite
flowd_aggregate.py: vacuum /var/netflow/interface_003600.sqlite
flowd_aggregate.py: vacuum /var/netflow/interface_086400.sqlite
flowd_aggregate.py: vacuum /var/netflow/src_addr_000300.sqlite
flowd_aggregate.py: vacuum /var/netflow/src_addr_003600.sqlite
flowd_aggregate.py: vacuum /var/netflow/src_addr_086400.sqlite
flowd_aggregate.py: vacuum /var/netflow/src_addr_details_086400.sqlite
flowd_aggregate.py: vacuum done
root@OPNsense:/var/log #
root@OPNsense:/var/log # grep 'failed to start flowd_aggregate' system.log
Nov  4 09:01:15 OPNsense root: /usr/local/etc/rc.d/flowd_aggregate: WARNING: failed to start flowd_aggregate
Nov  4 09:01:24 OPNsense root: /usr/local/etc/rc.d/flowd_aggregate: WARNING: failed to start flowd_aggregate
Nov  4 09:01:31 OPNsense root: /usr/local/etc/rc.d/flowd_aggregate: WARNING: failed to start flowd_aggregate
Nov  4 09:06:46 OPNsense root: /usr/local/etc/rc.d/flowd_aggregate: WARNING: failed to start flowd_aggregate
Nov  4 09:15:22 OPNsense root: /usr/local/etc/rc.d/flowd_aggregate: WARNING: failed to start flowd_aggregate
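
Given those warnings, the aggregator can presumably also be kicked by hand; the following is a sketch assuming the rc.d script from the log supports the usual start/status/restart actions:

# See whether the aggregation daemon is currently running
/usr/local/etc/rc.d/flowd_aggregate status
ps ax | grep -i flowd_aggregate

# Restart it and then watch system.log for the WARNING to reappear
/usr/local/etc/rc.d/flowd_aggregate restart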

I took a shot at making sure the databases were valid.

# pwd                                                                                                                                                                             
/var/netflow
# for sqlite in *.sqlite
do
  echo "$sqlite"; sqlite3 "$sqlite" 'pragma integrity_check;'
done
dst_port_000300.sqlite
ok
dst_port_003600.sqlite
ok
dst_port_086400.sqlite
ok
interface_000030.sqlite
ok
interface_000300.sqlite
ok
interface_003600.sqlite
ok
interface_086400.sqlite
ok
metadata.sqlite
ok
src_addr_000300.sqlite
ok
src_addr_003600.sqlite
ok
src_addr_086400.sqlite
ok
src_addr_details_086400.sqlite
ok



I did not run Reporting --> Settings --> Reset Netflow Data.
I have been avoiding this, as I did not want to lose what it has collected.
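
If resetting does end up being the only fix, my plan would be to copy the databases somewhere safe first so the history is at least recoverable later (a sketch; paths taken from the vacuum messages above):

# Keep a dated copy of every netflow database before touching anything
# (ideally with flowd/flowd_aggregate stopped so the files are quiescent)
cp -a /var/netflow /var/netflow.backup-$(date +%Y%m%d)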