This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - xpking

#1
Zenarmor (Sensei) / Not update to 2.0
June 15, 2025, 02:05:54 PM
Dear all,

If I update OPNsense, Zenarmor will also be updated to the new version.
I don't want to update to Zenarmor 2.0 at the moment. Is there a way to skip the update?
Thank you.
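
In case it helps, the approach I am thinking of (untested, and the package name below is an assumption, so I would check the exact names first) is to lock the Zenarmor packages so that pkg upgrade skips them:

pkg info | grep -i -e zenarmor -e sensei   # find the exact Zenarmor package names installed
pkg lock -y os-sensei                      # package name is an assumption; lock whatever the query above returns
# ... run the OPNsense update ...
pkg unlock -y os-sensei                    # later, to allow the Zenarmor update again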
#2
Hello,

I migrated from ISC to Kea for DHCP some months ago.
Now it seems there is a Dnsmasq DNS & DHCP option in OPNsense.
Which direction are we going? Should I migrate again to Dnsmasq?
I don't want to migrate again and again.
Please advise. Thank you.
#3
24.7, 24.10 Legacy Series / Re: Kea Lease issue
December 11, 2024, 03:59:07 PM
Thank you all for helping with this issue.
I found the way to clear the leases.

Just modify the file /var/db/kea/kea-leases4.csv, or simply remove the file, to clear the leases.
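
For the record, roughly what I did (a minimal sketch; the service name is an assumption based on the stock FreeBSD rc script, so adjust if your install differs):

service kea onestop              # stop Kea so it does not rewrite the lease file
rm /var/db/kea/kea-leases4.csv   # remove the lease database (or edit it to drop individual entries)
service kea onestart             # start Kea again; it should recreate an empty lease file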
#4
24.7, 24.10 Legacy Series / Re: Kea Lease issue
December 10, 2024, 03:23:10 AM
May I know if anyone can help, please?
I just need to clear the Kea settings and leases. Thank you.
#5
24.7, 24.10 Legacy Series / Re: Kea Lease issue
December 08, 2024, 03:42:54 PM
Yes, I deleted the reservation and added new entries.
My PC can get the new IP, but the old expired lease is still there (with the same IP as the OPNsense firewall), which is causing the issue!
I tried restarting Kea, then restarting OPNsense, but it is still the same.

For now I have switched to ISC DHCPv4, so there are no issues at the moment.

But I want to resolve the Kea issue.
Is there a way to clear a Kea lease IP? Or reset the Kea settings?
Can any expert provide the commands, or point out which files to modify?

Thank you.
#6
24.7, 24.10 Legacy Series / Kea Lease issue
December 08, 2024, 01:47:02 PM
Dear all,

I have a PC that uses a Kea reservation so that Kea gives it a static IP.
The lease time is 30 minutes.
However, I don't know what went wrong. I added a new PC with a Kea reservation.
Then the lease IP in Kea is the same as the OPNsense firewall's IP.
Also, the lease already reached the lease time but did not go away.
I run OPNsense with Zenarmor.

I added the entry again in the Kea reservations, and the PC can get the correct IP.
But it cannot connect to the internet. I think the incorrect info in Kea gets read by Zenarmor, and then Zenarmor's data becomes incorrect too.
Is there any way to remove a Kea lease IP, please?
Thank you.
#7
Zenarmor (Sensei) / Re: Devices auto disappear
December 01, 2024, 02:39:08 AM
Thank you!
What a joke Zenarmor!
#8
Zenarmor (Sensei) / Devices auto disappear
November 30, 2024, 11:17:00 AM
Dear all,

May I know if there is a rule that makes Zenarmor automatically clear devices that haven't been seen for some days?
I keep losing some devices when they haven't connected for a few days.
I would like to disable this feature if it exists.
Please advise. Thank you.
#9
Dear all,

I would like to get some help with a Zenarmor Elasticsearch crash issue.
Any idea why it happens?
Previous versions had no issue.
Thank you.

OPNsense: 24.7.5_3

Zenarmor
Engine 1.17.6
Database 1.17.24080514

From the /var/log/elasticsearch log:


[2024-10-13T08:11:53,999][WARN ][o.e.m.j.JvmGcMonitorService] [-qochfP] [gc][15036] overhead, spent [6.6s] collecting in the last [6.7s]
[2024-10-13T08:11:58,723][WARN ][o.e.m.j.JvmGcMonitorService] [-qochfP] [gc][15037] overhead, spent [4.6s] collecting in the last [4.7s]
[2024-10-13T08:12:05,345][INFO ][o.e.m.j.JvmGcMonitorService] [-qochfP] [gc][old][15038][11538] duration [6.6s], collections [1]/[6.6s], total [6.6s]/[16.5h], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [266.2mb]->[266.2mb]/[266.2mb]}{[survivor] [24.7mb]->[27.5mb]/[33.2mb]}{[old] [1.6gb]->[1.6gb]/[1.6gb]}
[2024-10-13T08:12:05,345][WARN ][o.e.m.j.JvmGcMonitorService] [-qochfP] [gc][15038] overhead, spent [6.6s] collecting in the last [6.6s]
[2024-10-13T08:15:36,638][ERROR][o.e.t.n.Netty4Utils      ] fatal error on the network layer
        at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:184)
        at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:89)
        at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
        at io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:850)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:364)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:68)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at java.lang.Thread.run(Thread.java:750)
[2024-10-13T08:15:36,640][WARN ][o.e.m.j.JvmGcMonitorService] [-qochfP] [gc][15039] overhead, spent [3.5m] collecting in the last [3.5m]
[2024-10-13T08:15:36,649][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [] fatal error in thread [Thread-2112], exiting
java.lang.OutOfMemoryError: Java heap space
[2024-10-13T08:15:36,668][ERROR][o.e.i.e.Engine           ] [-qochfP] [zenarmor_0000000000_ae574286-1542-4e33-9083-00060973b130_tls-241013][0] merge failed
java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.util.bkd.BKDWriter$MergeReader.<init>(BKDWriter.java:336) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.util.bkd.BKDWriter.merge(BKDWriter.java:538) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:212) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4356) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3931) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:624) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:99) ~[elasticsearch-5.6.16.jar:5.6.16]
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:661) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
[2024-10-13T08:15:36,679][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [] fatal error in thread [elasticsearch[-qochfP][generic][T#4]], exiting
java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.util.bkd.BKDWriter$MergeReader.<init>(BKDWriter.java:336) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.util.bkd.BKDWriter.merge(BKDWriter.java:538) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:212) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:173) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4356) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3931) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:624) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
        at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:99) ~[elasticsearch-5.6.16.jar:5.6.16]
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:661) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
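
For reference, the log above points at heap exhaustion (java.lang.OutOfMemoryError: Java heap space, with the heap shown as 1.9gb), so one thing I may try is raising the JVM heap of the bundled Elasticsearch. A rough sketch, assuming it reads a standard jvm.options file (the exact path is a guess, so I locate it first):

find /usr/local -name jvm.options 2>/dev/null   # locate the jvm.options used by the bundled Elasticsearch
# then raise the initial and maximum heap, e.g. to 4g if the box has spare RAM:
#   -Xms4g
#   -Xmx4g
# and restart Zenarmor's Elasticsearch service afterwards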



#10
I rebooted again and the issues are gone.
Thank you.
#11
24.7, 24.10 Legacy Series / Updated to 24.7.5_3 issue
October 02, 2024, 05:40:37 AM
Hello,

After I updated to 24.7.5_3, the following issues occurred.
Please help!

1. clamav_mon and freshclam_mon cannot start.
In the ClamAV logs I don't see any error messages. (What I have checked so far is listed at the end of this post.)
2. On the System > Firmware > Status page, my device updated and rebooted, but it is still showing the message below.
I tried another browser; it is still the same.

Your device is rebooting.
The upgrade has finished and your device is being rebooted at the moment, please wait...
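
What I have checked so far for issue 1 (a rough sketch; the rc script and log file names vary, so I list what actually exists rather than assume exact names):

ls /usr/local/etc/rc.d/ | grep -i clam    # discover the actual ClamAV rc script names
service -e | grep -i clam                 # see which ClamAV services are enabled at boot
ls /var/log/clamav/ 2>/dev/null           # log directory is an assumption; list whatever ClamAV logs exist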
#12
Thank you.
#13
Thank you for your reply.
I read a post saying there's an OS upgrade in 24.7 too.
I am using ZFS; am I still safe to use a boot environment?
Sorry, I don't have much knowledge of ZFS and BSD.
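
For my own notes, the boot-environment workflow I have in mind is roughly the following (a sketch using stock FreeBSD bectl; the environment name is just an example):

bectl list                     # show the existing ZFS boot environments
bectl create pre-24.7          # snapshot the current 24.1.10 system before upgrading
# if the upgraded system fails to boot, pick the old environment from the loader's
# Boot Environments menu, or activate it from a shell and reboot:
bectl activate pre-24.7 && shutdown -r now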
#14
Dear all,

May I know if there's a safe way to upgrade from 24.1.10 to 24.7?
It seems like a huge update.
By "safe" I mean that if it fails or cannot boot after the upgrade, I can revert to 24.1.10 immediately.
Thank you.
#15
24.1, 24.4 Legacy Series / Kea DHCPv4 subnet
April 30, 2024, 02:06:32 AM
Dear all,

May I know what's wrong with my subnet setting?
I don't know why this is not considered a valid subnet. Please help.

Subnet: 192.168.5.0   (error: Please specify a valid network segment or IP address.)
Pools: 192.168.5.1 - 192.168.5.254
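
Update in case someone else hits this: my guess is that the Subnet field expects CIDR notation (network plus prefix length) rather than a bare network address, i.e. something like:

Subnet: 192.168.5.0/24
Pools: 192.168.5.1 - 192.168.5.254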