Messages - XabiX

#16
Thanks all, I had the same issue and disabled IPS for now.
#17
22.1 Legacy Series / Re: IPv6 working properly???
February 01, 2022, 09:38:08 PM
I would be happy to share my remote connection ;)
#18
22.1 Legacy Series / Re: IPv6 working properly???
January 30, 2022, 09:31:48 PM
Hello all,

I am disappointed, as I have also been facing IPv6 issues since the upgrade. Not sure why, but I can't even ping ipv6.google.com.

root@OPNsense:~ # ping6 -I vtnet4 2a00:1450:400a:804::2004
PING6(56=40+8+8 bytes) 2a01:e0a:3ba:cb90::2 --> 2a00:1450:400a:804::2004

vtnet4: flags=8a63<UP,BROADCAST,RUNNING,ALLMULTI,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: POP
        options=800a8<VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,LINKSTATE>
        ether da:dc:fd:fa:f7:7c
        inet 192.168.0.254 netmask 0xffffff00 broadcast 192.168.0.255
        inet6 2a01:e0a:3ba:cb90::2 prefixlen 64
        inet6 fe80::d8dc:fdff:fefa:f77c%vtnet4 prefixlen 64 scopeid 0x5

Routing tables
Internet6:
Destination                       Gateway                       Flags   Nhop#    Mtu    Netif Expire
default                           fe80::72fc:8fff:fe6a:95d%vtnet4 UGS       6   1500   vtnet4
::1                               link#7                        UHS         1  16384      lo0
2000::/3                          fe80::72fc:8fff:fe6a:95d%vtnet4 UGS       7   1500   vtnet4
2a01:e0a:3ba:cb90::/64            link#5                        U           5   1500   vtnet4
2a01:e0a:3ba:cb90::2              link#5                        UHS         4  16384      lo0
fe80::%vtnet4/64                  link#5                        U           5   1500   vtnet4
fe80::d8dc:fdff:fefa:f77c%vtnet4  link#5                        UHS         4  16384      lo0
fe80::%lo0/64                     link#7                        U           3  16384      lo0
fe80::1%lo0                       link#7                        UHS         2  16384      lo0

traceroute6 to 2a00:1450:400a:804::2004 (2a00:1450:400a:804::2004) from 2a01:e0a:3ba:cb90::2, 64 hops max, 28 byte packets
1  2a01:e0a:3ba:cb90::2  3048.035 ms !A  3014.750 ms !A  2999.995 ms !A
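(For reference: the `!A` annotations in the traceroute above mean "communication administratively prohibited", i.e. the firewall itself is rejecting the packets before they leave the box. A quick sanity check of the addresses involved, as a hypothetical sketch using Python's `ipaddress` module with the values taken from the output above:)

```python
import ipaddress

# Addresses from the ping6 / routing-table output above
gw = ipaddress.ip_address("fe80::72fc:8fff:fe6a:95d")   # default gateway
src = ipaddress.ip_address("2a01:e0a:3ba:cb90::2")      # vtnet4 source address
dst = ipaddress.ip_address("2a00:1450:400a:804::2004")  # ipv6.google.com

# The gateway is link-local, so the default route must be scoped to an
# interface (the %vtnet4 suffix in the routing table), which it is.
print(gw.is_link_local)          # True

# Source and destination are both global unicast (inside 2000::/3),
# so they match the 2000::/3 and default entries via vtnet4.
glob = ipaddress.ip_network("2000::/3")
print(src in glob, dst in glob)  # True True
```

So the addressing and routes look consistent; the `!A` replies point at a filtering rule rather than a routing problem.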

2022-01-30T21:14:52   Error   opnsense   /system_gateways.php: ROUTING: setting IPv6 default route to fe80::72fc:8fff:fe6a:95d   
2022-01-30T21:14:52   Error   opnsense   /system_gateways.php: ROUTING: IPv6 default gateway set to opt3

The interface firewall rules allow all IPv4 and IPv6 traffic out.

Any idea?

Thanks
#19
Hello,

I have some outgoing traffic being blocked from my LAN (called POP) to my Internet interface (called WAN). I can't understand why it is sometimes allowed and sometimes blocked.

Any idea? Is it based on the TCP flags or an out-of-band packet? Is it anything to worry about, or is it normal if the client is not well implemented?

Thanks
XabiX
#20
Any update on this? How do I get rid of the warning?

When is the new Suricata supposed to land in OPNsense?
#21
Looking better this morning  :D

PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
11 root 155 ki31 0 32K CPU1 1 81:05 100.00% [idle{idle: cpu1}]
0 root -16 - 0 880K swapin 0 669:39 0.00% [kernel{swapper}]
17217 root 20 0 26M 23M select 1 2:21 0.00% /usr/local/bin/python3 /usr/local/opnsense/scripts/netflow/flowd_aggregate.py (python3.7)
57611 root 20 0 2750M 664M nanslp 1 0:38 0.00% /usr/local/bin/suricata -D --netmap --pidfile /var/run/suricata.pid -c /usr/local/etc/suricata/suricata.yaml{suricata}
#22
Hello Team and Experts,

I am happy to have joined OPNsense after a long time on pfSense!

I was running 20.1.2 without any issue, but since the upgrade to 20.1.5 my AMD Ryzen 7 3700X 8-Core Processor (2 cores) is pegged at 100% because of Netflow. I tried removing the interfaces (clear all) to deactivate Netflow, but the load stayed the same (so I put the settings back as they were).

Any idea of what can be the issue?
100.00%   /usr/local/bin/python3 /usr/local/opnsense/scripts/netflow/flowd_aggregate.py (python3.7)


ls -lah /var/netflow/*
-rw-r-----  1 root  wheel   3.1M Apr 23 22:41 /var/netflow/dst_port_000300.sqlite
-rw-r-----  1 root  wheel    61K Apr 23 22:41 /var/netflow/dst_port_000300.sqlite-journal
-rw-r-----  1 root  wheel   848K Apr 23 22:41 /var/netflow/dst_port_003600.sqlite
-rw-r-----  1 root  wheel    33K Apr 23 22:41 /var/netflow/dst_port_003600.sqlite-journal
-rw-r-----  1 root  wheel   2.5M Apr 23 22:41 /var/netflow/dst_port_086400.sqlite
-rw-r-----  1 root  wheel    61K Apr 23 22:41 /var/netflow/dst_port_086400.sqlite-journal
-rw-r-----  1 root  wheel   7.1M Apr 23 22:41 /var/netflow/interface_000030.sqlite
-rw-r-----  1 root  wheel    93K Apr 23 22:41 /var/netflow/interface_000030.sqlite-journal
-rw-r-----  1 root  wheel   2.5M Apr 23 22:41 /var/netflow/interface_000300.sqlite
-rw-r-----  1 root  wheel    37K Apr 23 22:41 /var/netflow/interface_000300.sqlite-journal
-rw-r-----  1 root  wheel   680K Apr 23 22:41 /var/netflow/interface_003600.sqlite
-rw-r-----  1 root  wheel    33K Apr 23 22:41 /var/netflow/interface_003600.sqlite-journal
-rw-r-----  1 root  wheel    56K Apr 23 22:41 /var/netflow/interface_086400.sqlite
-rw-r-----  1 root  wheel   8.5K Apr 23 22:41 /var/netflow/interface_086400.sqlite-journal
-rw-r-----  1 root  wheel    12K Apr 23 22:41 /var/netflow/metadata.sqlite
-rw-r-----  1 root  wheel    12M Apr 23 22:41 /var/netflow/src_addr_000300.sqlite
-rw-r-----  1 root  wheel   145K Apr 23 22:41 /var/netflow/src_addr_000300.sqlite-journal
-rw-r-----  1 root  wheel   4.9M Apr 23 22:41 /var/netflow/src_addr_003600.sqlite
-rw-r-----  1 root  wheel    61K Apr 23 22:41 /var/netflow/src_addr_003600.sqlite-journal
-rw-r-----  1 root  wheel    18M Apr 23 22:41 /var/netflow/src_addr_086400.sqlite
-rw-r-----  1 root  wheel   321K Apr 23 22:41 /var/netflow/src_addr_086400.sqlite-journal
-rw-r-----  1 root  wheel    98M Apr 23 22:41 /var/netflow/src_addr_details_086400.sqlite
-rw-r-----  1 root  wheel   1.1M Apr 23 22:41 /var/netflow/src_addr_details_086400.sqlite-journal


root@OPNsense:/home/xabix # ls -lah /var/log/flowd*
-rw-------  1 root  wheel    77K Apr 23 22:58 /var/log/flowd.log
-rw-------  1 root  wheel   258M Apr 23 22:56 /var/log/flowd.log.000001
-rw-------  1 root  wheel    10M Apr 20 15:35 /var/log/flowd.log.000002
-rw-------  1 root  wheel    10M Apr 20 13:05 /var/log/flowd.log.000003
-rw-------  1 root  wheel    10M Apr 20 09:55 /var/log/flowd.log.000004
-rw-------  1 root  wheel    10M Apr 20 06:24 /var/log/flowd.log.000005
-rw-------  1 root  wheel    10M Apr 20 02:35 /var/log/flowd.log.000006
-rw-------  1 root  wheel    10M Apr 19 23:00 /var/log/flowd.log.000007
-rw-------  1 root  wheel    10M Apr 19 20:11 /var/log/flowd.log.000008
-rw-------  1 root  wheel    10M Apr 19 16:58 /var/log/flowd.log.000009
-rw-------  1 root  wheel    10M Apr 19 13:46 /var/log/flowd.log.000010


root@OPNsense:/home/xabix # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/gpt/rootfs     15G    3.1G     10G    23%    /
devfs              1.0K    1.0K      0B   100%    /dev
fdescfs            1.0K    1.0K      0B   100%    /dev/fd
procfs             4.0K    4.0K      0B   100%    /proc
devfs              1.0K    1.0K      0B   100%    /var/dhcpd/dev
devfs              1.0K    1.0K      0B   100%    /var/unbound/dev


I am launching a repair of the Netflow database to see if that fixes anything. It seems there were similar issues/patches in the past, depending on the Python release.

Am I the only one facing this issue? Is there a way to reset the Netflow data without reinstalling? I assume deleting the Netflow databases would help, but would that be enough?
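(In case anyone wants to inspect or compact these files by hand, here is a minimal maintenance sketch. It assumes the aggregate databases under /var/netflow/ are plain SQLite files, as the .sqlite/.sqlite-journal names suggest; `PRAGMA integrity_check` and `VACUUM` are standard SQLite, not an OPNsense-specific API. Stop the netflow/flowd services and back up /var/netflow before trying this.)

```python
import glob
import sqlite3

# Hypothetical sketch: verify and compact each netflow aggregate database.
# Run only while the flowd/aggregation services are stopped.
for path in sorted(glob.glob("/var/netflow/*.sqlite")):
    con = sqlite3.connect(path)
    try:
        # "ok" means the file passed SQLite's built-in consistency check
        (status,) = con.execute("PRAGMA integrity_check").fetchone()
        print(f"{path}: {status}")
        if status == "ok":
            con.execute("VACUUM")  # rewrite the file, reclaiming free pages
    finally:
        con.close()
```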

Thanks
XabiX