Messages - tuto2

#1
Glad you like it :)

Cheers,
Stephan
#2
Hi all,

Work to include IPv6 support for the Captive Portal system has finished, if you'd like to give it a spin:

# opnsense-patch https://github.com/opnsense/core/commit/497ed54fe18c

The patch requires version 26.1.4 or 26.1.5 and a reboot to take effect.

Some important notes:

- A new checkbox called "roaming" has been added and is enabled by default. This option allows the portal to synchronize and administrate IPv4/IPv6 client aliases. It is required for full IPv6 compatibility, since clients commonly hold multiple IPv6 addresses.

- Hostwatch (Interfaces: Neighbors: Automatic Discovery) must be enabled for the administration of IPv6 addresses, as the output of NDP can be rather slow in some setups.

- A hostname must be configured for each zone that should support IPv6 (a certificate isn't required). Whereas IPv4 zone networks are usually static, IPv6 may be tracked through Identity Association or other means, in which case the portal cannot reliably guess which IPv6 address should be used for redirection. Instead, this is delegated to DNS, which also means the proper DNS records must be available. For any default setup using Unbound, these can be synthesized with the DNS64 option in Services: Unbound DNS: General.

- The primary IP the client uses to log in to the portal is allowed by the firewall immediately. All other IP addresses associated with the client are synchronized afterwards, so there can be a slight delay before those addresses are allowed by the firewall as well.
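As a side note on the DNS64 point above: the GUI option corresponds to Unbound's dns64 module. A minimal hand-written equivalent might look like the following (illustrative only; on OPNsense you would tick the DNS64 checkbox in Services: Unbound DNS: General rather than edit the config, and the prefix shown is the well-known NAT64 prefix from RFC 6052, which your setup may override):

```
server:
    # Put the dns64 module in front of the validator/iterator chain so
    # AAAA records are synthesized from A records when none exist.
    module-config: "dns64 validator iterator"
    # Well-known NAT64/DNS64 prefix; adjust to match your NAT64 deployment.
    dns64-prefix: 64:ff9b::/96
```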

Thanks in advance if you'd like to test this, and a special mention to Alex Goodkind for his initial work and helpful testing.

Cheers,
Stephan

#3
26.1 Series / Re: FW live view not working regex
March 10, 2026, 01:40:27 PM
Hi,

I seem to have missed that regex was allowed on the old page. https://github.com/opnsense/core/commit/41664263de3f4fe211d0e7af9d0a471c300ceb21 should address this.

# opnsense-patch 4166426

Cheers,
Stephan
#4
Quote from: OPNenthu on February 18, 2026, 09:45:53 PM
However, if I resolve subdomains like 'www.facebook.com' these are not blocked:

The blocklists treat an entry as a wildcard only if it starts with "*." in the downloaded list; all other entries are matched exactly.
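The matching rule above could be sketched roughly like this (a hypothetical illustration, not the actual OPNsense blocklist code):

```python
def is_blocked(query: str, blocklist: set[str]) -> bool:
    """Return True if `query` matches a blocklist entry.

    Entries starting with "*." match any subdomain of the remaining
    labels; all other entries require an exact match.
    """
    if query in blocklist:  # exact match for plain entries
        return True
    labels = query.split(".")
    # Walk up the label chain, checking for a wildcard covering a parent.
    for i in range(len(labels) - 1):
        parent = ".".join(labels[i + 1:])
        if "*." + parent in blocklist:
            return True
    return False
```

With this sketch, `is_blocked("www.facebook.com", {"facebook.com"})` is False (exact match only), while a `*.facebook.com` entry would catch the subdomain.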
#5
Hi there,

Do you have a complete Unbound log snippet from before and after the 13:31 failure? What is the size of the /var/unbound/data/unbound.duckdb file around the time of the failure and after? Are any clients on the network making a suspicious number of DNS requests around the time of the failure (this should show as a pretty big spike in the reporting section)? Is there enough free disk space left on the device?

Cheers,
Stephan

#7
Hi Toon,

This does indeed look suspicious. Perhaps it's best if you open a ticket on GitHub (https://github.com/opnsense/core/issues) with the relevant details you provided so we can track it more accurately.

Cheers,
Stephan
#8
From the console, can you share the output of "# pfctl -vs ether -a captiveportal_zone_0" when your client is connected? (sanitize IPs as needed)
#9
Well, let me know if the issue persists on 25.7 and, if so, how you tested it. Setting the idle timeout to 1 minute and leaving a ping running on the client should show quickly enough whether there is a problem.
#10
For context, I tested on 25.7 but nothing has changed on the Captive Portal side between these two versions.
#11
> There was a section where I had entered 10000 Mbp/s because there was no limit on the number of Mbp/s, and I had to retype this as 40000 Mbp/s or 4 Gbp/s, or use the command revert

Validations were added to the GUI a long time ago to prevent entries above 4 Gbps; perhaps this configuration predates that validation and now crashes due to the backend service change.

Before this patch the shaping was fully handled by IPFW, so if it broke there, traffic would likely still pass through pf. Now that pf handles this and crashes on the pipe configuration, it's reasonable to assume traffic would lock up as well. Either way, it's an incorrect configuration in both scenarios, so the right thing to do is to ditch the 10 Gbps pipes.
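The GUI validation discussed above amounts to normalizing the configured bandwidth and rejecting values over the ceiling. A hypothetical sketch of that check (the 4 Gbit/s limit comes from the post; the function and unit names are illustrative, not the real backend code):

```python
MAX_PIPE_BPS = 4 * 10**9  # 4 Gbit/s ceiling mentioned for shaper pipes

def validate_pipe_bandwidth(value: int, unit: str) -> int:
    """Normalize a pipe bandwidth to bit/s, rejecting out-of-range input."""
    multipliers = {"bit": 1, "Kbit": 10**3, "Mbit": 10**6, "Gbit": 10**9}
    if unit not in multipliers:
        raise ValueError(f"unknown unit: {unit}")
    bps = value * multipliers[unit]
    if bps > MAX_PIPE_BPS:
        raise ValueError("pipe bandwidth exceeds the 4 Gbit/s maximum")
    return bps
```

A 10000 Mbit pipe, as in the quoted configuration, would be rejected by a check like this instead of reaching the shaper backend.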
#12
A CARP VIP got deleted here while an IP alias was still present with the same VHID group; this should be caught by validation.

I'll add a patch that bails if no primary CARP is found, but your leftover IP alias should have its VHID group stripped as well.
#13
Hi,

The development documentation is not live yet, but you can find it here: https://github.com/opnsense/docs/tree/dashboard.

Cheers,
Stephan
#14
24.7, 24.10 Legacy Series / Re: New Dashboard
June 15, 2024, 01:48:17 PM
Quote from: Seimus on June 15, 2024, 12:06:15 AM
I did spin it as well in a VM, and I must say I am impressed, that new dashboard is fluid, also in regards of resources drain, I don't see anything significant.

Well done OPN devs.

Much appreciated! The design focuses on efficiency as much as possible in contrast to the old dashboard, which had a tendency to be slow on page load.

When you consider resource drain, there's always going to be a cost to data collection. Things are now better with the streaming implementations, which prevent backend processes from having to start and stop all the time, reducing a lot of the overhead.

While the roadmap for the dashboard for 24.7 is set (some widgets have yet to be added), new ideas for widgets are welcome of course. For anyone willing to have a go at this themselves, the development documentation will be synced around the time of the 24.7 release.

Cheers,
Stephan
#15
24.7, 24.10 Legacy Series / Re: New Dashboard
June 14, 2024, 02:52:55 PM
Quote from: fabianodelg on June 14, 2024, 12:21:38 PM
I just hope that will not be 'heavy' on CPU like the current one....

Which CPU, client or firewall?

In general, CPU/GPU usage has increased somewhat on the client side (neat graphics aren't free), and reduced on the firewall side.

Cheers,
Stephan