
Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - iamperson347

#1
Thinking through my previous post... I guess ULAs, despite the usage-preference issues I mentioned there, would be fine specifically for DNS server advertisements, since DNS lookups aren't even involved at that point. In the grand scheme of things, I'm just trying to avoid manual firewall config changes whenever my prefix changes.
#2
I guess I was trying to make things "fully dual stack," but it just doesn't look ideal if you can't get a static prefix. Granted, my ISP typically doesn't change it, but I know for a fact that it isn't static.

I was going to go down the ULA path since I could make those static, but if you advertise internal services over both IPv4 and IPv6 (via DNS, using an IPv6 ULA), clients seem to prefer IPv4 in that case. I think there is a proposal to change that, which would allow true static IPv6 addressing for internal services where the ISP doesn't provide static prefix delegations: https://datatracker.ietf.org/doc/draft-ietf-6man-rfc6724-update/
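For context, that IPv4-over-ULA preference comes from the RFC 6724 default policy table, where IPv4-mapped addresses outrank ULAs. A sketch of the relevant entries (FreeBSD /etc/ip6addrctl.conf format; the raised ULA precedence is only an illustration of what the draft proposes, not a tested recommendation):

```
# RFC 6724 default policy table (excerpt): IPv4-mapped destinations
# (::ffff:0:0/96, precedence 35) outrank ULAs (fc00::/7, precedence 3),
# so dual-stack clients pick IPv4 over a ULA destination.
#
# Prefix          Precedence  Label
::1/128           50          0
::/0              40          1
fc00::/7          45          13    # raised from the default 3 (sketch only)
::ffff:0:0/96     35          4
```

Raising the fc00::/7 precedence above 35, as sketched, is roughly what would make clients prefer the ULA; the draft aims to standardize behavior along these lines.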

Maybe I should just stick to advertising IPv4 DNS servers. It stinks because it feels like it's not fully dual stack, but I'd guess local IPv4 isn't going away for a very long time.
#3
Hey All - I searched around a bit and poked around on the firewall, but couldn't find an answer.

Let's say you get a delegated IPv6 prefix on your WAN and track that interface on a subnet/VLAN where you have a DNS server deployed. The DNS server gets an IPv6 address as expected. You then set the "DNS Server" DHCP option in other subnets (via DHCPv6, which flows down to RA/SLAAC) to hand out that DNS server's IPv6 address.

I can make the IPv6 address "static" via a DHCPv6 static mapping for the last portion of the address, but since it's a delegated prefix, the first portion of the address could change (invalidating the DNS Server option).

OPNsense has something to handle this scenario for firewall rules (dynamic IPv6 host aliases). Is any such option available in other portions of the firewall config, such as setting the IPv6 DNS server to hand out in the scenario above? I checked the ISC DHCPv6 and Dnsmasq configs and didn't see one.
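Lacking a built-in option, one hedged workaround sketch would be a small script that rebuilds the DNS server's full address from the current delegated prefix plus the static-mapping suffix, and then pushes that into the DHCPv6 config. All values below are invented for illustration:

```shell
#!/bin/sh
# Sketch: join the current delegated prefix (however you obtain it)
# with the fixed interface ID from the static mapping.
# "2001:db8:abcd:10::" and "53" are made-up placeholder values.
prefix="2001:db8:abcd:10::"   # would be read from the tracked interface
suffix="53"                   # static-mapping interface identifier
dns_addr="${prefix%::}::${suffix}"
echo "$dns_addr"              # -> 2001:db8:abcd:10::53
```

Such a script would still need a hook to run whenever the delegation changes, which is exactly the moving part the dynamic host aliases handle for firewall rules.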

Thanks!
#4
NP - it was your other post that prompted me to test it.

I wonder if franco has any thoughts on why this happened to some of us. Either way, I'm glad it's fixed.
#5
I think this only partially upgrades the firewall.

With that being said, you can actually set the firewall to prefer IPv4. I did this and the update process worked again. Now that I'm on 22.1rc1, I unset the prefer-IPv4 option, rebooted to be safe, and update checks still work fine.

I wonder if there was just some issue with pkg/fetch and IPv6 connectivity on some of the dev builds. I know IPv6 works fine for me on the latest community build, and it seems fine again on 22.1rc1.
#6

root@OPNsense:~ #  ping -c4 -s1500 pkg.opnsense.org
PING6(1548=40+8+1500 bytes) 2600:8805:7f20:200:f0c0:8e63:4c48:70d3 --> 2001:1af8:4f00:a005:5::
1508 bytes from 2001:1af8:4f00:a005:5::, icmp_seq=0 hlim=51 time=100.618 ms
1508 bytes from 2001:1af8:4f00:a005:5::, icmp_seq=1 hlim=51 time=98.166 ms
1508 bytes from 2001:1af8:4f00:a005:5::, icmp_seq=2 hlim=51 time=98.463 ms
1508 bytes from 2001:1af8:4f00:a005:5::, icmp_seq=3 hlim=51 time=98.095 ms

--- pkg.opnsense.org ping6 statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 98.095/98.835/100.618/1.038 ms
root@OPNsense:~ #
root@OPNsense:~ #
root@OPNsense:~ # ping6 -c4 -s1500 pkg.opnsense.org
PING6(1548=40+8+1500 bytes) 2600:8805:7f20:200:f0c0:8e63:4c48:70d3 --> 2001:1af8:4f00:a005:5::
1508 bytes from 2001:1af8:4f00:a005:5::, icmp_seq=0 hlim=51 time=98.541 ms
1508 bytes from 2001:1af8:4f00:a005:5::, icmp_seq=1 hlim=51 time=98.151 ms
1508 bytes from 2001:1af8:4f00:a005:5::, icmp_seq=2 hlim=51 time=99.409 ms
1508 bytes from 2001:1af8:4f00:a005:5::, icmp_seq=3 hlim=51 time=99.212 ms

--- pkg.opnsense.org ping6 statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 98.151/98.828/99.409/0.506 ms
root@OPNsense:~ #
root@OPNsense:~ #
root@OPNsense:~ # ping -c4 -s1500 mirror.dns-root.de
PING6(1548=40+8+1500 bytes) 2600:8805:7f20:200:f0c0:8e63:4c48:70d3 --> 2606:4700:3036::ac43:ce5d
1508 bytes from 2606:4700:3036::ac43:ce5d, icmp_seq=0 hlim=58 time=13.659 ms
1508 bytes from 2606:4700:3036::ac43:ce5d, icmp_seq=1 hlim=58 time=13.070 ms
1508 bytes from 2606:4700:3036::ac43:ce5d, icmp_seq=2 hlim=58 time=11.723 ms
1508 bytes from 2606:4700:3036::ac43:ce5d, icmp_seq=3 hlim=58 time=12.884 ms

--- mirror.dns-root.de ping6 statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 11.723/12.834/13.659/0.702 ms
root@OPNsense:~ #
root@OPNsense:~ # ping6 -c4 -s1500 mirror.dns-root.de
PING6(1548=40+8+1500 bytes) 2600:8805:7f20:200:f0c0:8e63:4c48:70d3 --> 2606:4700:3034::6815:16b3
1508 bytes from 2606:4700:3034::6815:16b3, icmp_seq=0 hlim=58 time=12.176 ms
1508 bytes from 2606:4700:3034::6815:16b3, icmp_seq=1 hlim=58 time=12.748 ms
1508 bytes from 2606:4700:3034::6815:16b3, icmp_seq=2 hlim=58 time=13.794 ms
1508 bytes from 2606:4700:3034::6815:16b3, icmp_seq=3 hlim=58 time=12.641 ms

--- mirror.dns-root.de ping6 statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 12.176/12.840/13.794/0.591 ms
root@OPNsense:~ #
root@OPNsense:~ #
root@OPNsense:~ # ping -4 -c4 -s1500 mirror.dns-root.de
PING mirror.dns-root.de (172.67.206.93): 1500 data bytes
1508 bytes from 172.67.206.93: icmp_seq=0 ttl=59 time=13.823 ms
1508 bytes from 172.67.206.93: icmp_seq=1 ttl=59 time=14.634 ms
1508 bytes from 172.67.206.93: icmp_seq=2 ttl=59 time=13.123 ms
1508 bytes from 172.67.206.93: icmp_seq=3 ttl=59 time=12.400 ms

--- mirror.dns-root.de ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 12.400/13.495/14.634/0.828 ms
root@OPNsense:~ #


----

My update issue occurs right after upgrading from the latest community build to the dev build. It seems I can't even try to get onto 22.1rc1, due to the fetch and pkg issues after switching to dev.

On community, no issues with updates. If I roll back my VM, no issues.

It doesn't seem to matter which mirror I select. In fact, fetch seems to act up with any URL I throw at it, and it's not a DNS issue from what I can see.
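For reference, the -s1500 pings above deliberately exceed a 1500-byte MTU, so replies only come back if fragmented traffic survives the path. A quick shell sanity check of the packet-size arithmetic shown in the PING6 header lines:

```shell
# Packet-size arithmetic for the ping6 -s1500 tests above:
# IPv6 header (40) + ICMPv6 header (8) + payload (1500) = 1548 bytes,
# which exceeds a 1500-byte MTU and therefore requires fragmentation.
ipv6_hdr=40; icmp6_hdr=8; payload=1500; mtu=1500
total=$((ipv6_hdr + icmp6_hdr + payload))
echo "total=$total"                        # -> total=1548
[ "$total" -gt "$mtu" ] && echo "needs fragmentation"
```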
#7
I experienced the same issue with updates in a VM, but with physical NICs passed through (no virtual NIC). Does this rule out fragmentation due to running in a VM?
#9
I'm chiming in to say I've seen similar issues. Running on Proxmox, I can only route about 600 Mbps through OPNsense using virtio/vtnet. A related kernel process in OPNsense shows 100% CPU usage, and the underlying vhost process on the Proxmox host is pegged as well.

Trying a Linux VM on the same segment (i.e., not routing through OPNsense) saturates the 1-gig NIC on my desktop with only 25% CPU usage on the vhost process for the VM's NIC.

I know some blame has been put on CPU speed and the like, but I think there is some sort of performance issue with the vtnet drivers; even pfSense users have had similar complaints. I also tried the new OPNsense development build (FreeBSD 13) with no improvement.

I passed my NIC through to the OPNsense VM, reconfigured the interfaces, and can route 1 Gbps no sweat. This is with the em driver (which supports my NIC).

Note: I can get 1 Gbps with multiple queues set on the vtnet adapters for the OPNsense VM. However, that still doesn't fix the performance issue with a single "stream."
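For anyone wanting to try the multiqueue setting mentioned above, a hedged sketch of how it can be applied on the Proxmox side (VM ID 100, the MAC address, and bridge vmbr0 are placeholder values):

```
# /etc/pve/qemu-server/100.conf fragment (illustrative values):
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4

# or equivalently from the Proxmox host shell:
# qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4
```

Multiqueue spreads flows across vCPUs, which matches the observation that it helps aggregate throughput but not a single stream.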
#10
I think this issue should be resolved for OPNsense by this commit (not yet in a released version).

https://github.com/opnsense/core/commit/7316071974b790b6262b65465649df61aa55c500
#11
That rule looks right. What do you mean by "use my Pi-hole server on my Home VLAN"? Are you trying to force clients to use the Pi-hole, or just set the DNS servers for your guest machines to point to it? You could probably set the source to "Guest Network."

Mine looks pretty similar (screenshot attached).
#12
Quote from: franco on March 31, 2021, 08:56:58 AM
It's been reported previously. We would suggest taking this to ntopng since we can't do much about it.


Cheers,
Franco

Hi Franco,

Is it possible that something like this would help?

https://github.com/iamperson347/core/commit/cb75a0003389d6181403c0d10ef425423b6ce12f

I tested it on my machine and it seems to fix the issue. I don't know if this has any ill effects, but shutting things down in the reverse of the rcorder sequence seems logical when the rc.freebsd file is called with "stop."

What do you think?

#13
It looks like someone reported it. We will see what ntopng devs have to say.

https://github.com/ntop/ntopng/issues/5127

Does the OPNsense plugins build process generate the rc scripts, or do they come from the upstream package?
#14
It seems this may be related to ntopng and redis. When I manually ran /usr/local/etc/rc.reboot, it appears redis gets shut down before ntopng, which causes ntopng to hang while shutting down.

I'll have to look into changing the order to see if that fixes the issue. I tried modifying the ntopng rc script to REQUIRE redis, but that didn't seem to help.
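The shutdown-ordering idea can be sketched roughly like this (illustrative only; paths and the helper are my own, not what OPNsense actually runs): stop services in the reverse of their rcorder(8) start order, so dependencies such as redis outlive consumers such as ntopng.

```shell
#!/bin/sh
# Sketch: stop rc.d services in reverse dependency order.
reverse_lines() {
    # Portable line reversal (tail -r / tac are not available everywhere).
    awk '{ lines[NR] = $0 } END { for (i = NR; i >= 1; i--) print lines[i] }'
}
# rcorder prints scripts in start order; reversing it gives a stop order
# in which each service stops before anything it depends on.
for script in $(rcorder /usr/local/etc/rc.d/* 2>/dev/null | reverse_lines); do
    "$script" stop
done
```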
#15
Hi All,

I'm running into an odd issue where my install of OPNsense will not reboot through the WebUI on the first attempt. The dialog pops up saying the reboot is happening, and the "shutdown chime" plays on the physical machine, but after a few seconds I get sent back to the dashboard. If I go to the reboot screen and select reboot again, the system then reboots properly.

This also appears to affect reboots required/triggered by upgrades.

However, using "reboot" command via ssh shell session works fine.

I tried looking through the logs to see what's going on, but the system log doesn't really show anything at the time the issue occurs. The configd.log says "rebooting system" (or something to that effect), but that's it.

Any ideas on where else I can look to determine what might be holding things up?