This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - bringha

#16
Done
#19
All,

just updated to 23.7 without any problems; all services were up again after about 12 minutes of upgrade time. Great work and a big thank you to the entire OPNsense team for another flawless major update. Great job.

One small cosmetic remark: the WireGuard widget on the dashboard does not seem to line-break the public key or format its columns appropriately, so the widget looks somewhat odd.

Another topic I would like to come back to is the ddclient OPNsense backend and the extension of the supported standard service providers, as proposed here, for example:
https://forum.opnsense.org/index.php?topic=34388.0
I had the provider desec running stably for a couple of weeks on 23.1.11, and I am wondering whether the extension could find its way into mainstream, or is something still missing? I applied the code change again on 23.7 and it works there as well.

Br br

#20
Well,

a fast WAN is only one of many criteria that should determine the hardware choice. Which services do you need or want to run, how many internal networks should the hardware support, should HA play a role, is expandability (RAM, ...) important? Energy efficiency you have already mentioned.

The Protectli Vault hardware is indeed a bit underpowered at high bandwidths.

I also started with a small off-the-shelf box, but then moved on to putting the hardware together myself, and I have had very good results with the Supermicro embedded motherboards.

Currently I use an A2SDV-8C-LN8F board and an A2SDV-8C-LN10PF, each with 32 GB RAM, in a rack case. That may be somewhat oversized, but it offers plenty of flexibility and headroom, also for ZFS installations with multiple snapshots etc.

They are robust and stable, and I have had no problems with them so far. For my use cases the power consumption is well below that of a Fritz!Box. It is certainly not everyone's cup of tea, but in my view it is worth looking into. OPNsense offers so many possibilities that you often only discover over time, and it is simply tedious when the hardware suddenly becomes the limiting factor.

Br br
#21
Hi,

In the meantime I was able to reboot the machine and got a new set of IP addresses - the throttling behaves the same as in the old version. I hope I did not misunderstand you and the revert is relevant for the ddclient backend only?

@Franco: You furthermore mentioned that the new backend does not support updating both IPs in one request, while ddclient itself does. However, this is functionality that an increasing number of DNS service providers seem to require.

Is it possible to refactor the
_current_address=checkip(...

logic in the BaseAccount class in such a way that both addresses are made available as properties there, e.g. as the result of two subsequent calls to checkip()? The addresses would then be available for the individual account code to use. Or is the idea of the new backend architecture that such a case should be handled directly with two subsequent checkip() calls in the account code? Imho I would prefer the first solution.
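
To make the idea a bit more concrete, here is a rough sketch of what I mean. All names (current_address_v4/v6, the checkip() signature, the family argument) are made up by me for illustration and do not reflect the real OPNsense ddclient code:

def checkip(account, family):
    """Placeholder for the backend's real check-ip logic (assumed signature)."""
    raise NotImplementedError


class BaseAccount:
    def __init__(self, account):
        self._account = account
        self._current_address_v4 = None
        self._current_address_v6 = None

    def _resolve_addresses(self):
        # two subsequent checkip() calls, one per address family
        self._current_address_v4 = checkip(self._account, family='inet')
        self._current_address_v6 = checkip(self._account, family='inet6')

    @property
    def current_address_v4(self):
        if self._current_address_v4 is None:
            self._resolve_addresses()
        return self._current_address_v4

    @property
    def current_address_v6(self):
        if self._current_address_v6 is None:
            self._resolve_addresses()
        return self._current_address_v6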

E.g.: I am just about to try to create account code for Ionos, and the only way to get both A and AAAA records updated is via a URL which looks like:
https://ipv4.api.hosting.ionos.com/dns/v1/dyndns?q=NDFjZmM3YmVjYjQzNDRhMTkxMzliZDAwYzA2OGU3NzEuU2FvNlhuR2U4UmtxNGdiQzlMN19TLWpZanM4LWZBdGsxX2Ixa2FFUmRFWUp4Z1pmR3NWOVFpUjZYZGQ5TTZ5QjBIZkxSRFAyN2lzeHhCRWNuNVpSU0E&ipv4=<ipaddr>&ipv6=<ip6addr>
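
The account code for such a provider could then look roughly like this - purely illustrative, building on the sketch above; the 'update_url' field and the whole plumbing are my own assumptions, not the real backend interface:

import requests


class IonosAccount(BaseAccount):
    """Hypothetical account class sending both addresses in one request."""

    def execute(self):
        # the dyndns update URL issued by Ionos (the long ...?q=... URL above)
        update_url = self._account.get('update_url')
        response = requests.get(
            update_url,
            params={
                'ipv4': self.current_address_v4,   # appended to the existing query string
                'ipv6': self.current_address_v6,
            },
            timeout=30,
        )
        return response.status_code == 200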

What is your view on this?

Looking forward to your reply

Br br
#22
Hi Franco,

I did not mention the steps which did NOT work. ;) Sorry for that one...

desec support recommends using the update server https://update6.dedyn.io/ together with the IPv4 address, which shall (sometimes) allow both addresses (IPv4 and IPv6) to be updated (if it doesn't, they kindly refer to the 'folks of OPNsense who could help out').

I could not get this to work with any of the dyndns options on OPNsense (legacy and both ddclient backends) yet. Even worse, every attempt with split calls for IPv4 and IPv6 (which is also described in the desec docs) failed; the result was either an A or an AAAA record, but never both.

So it was a very positive surprise that the new OPNsense backend somehow manages to set both an A and an AAAA record with two separate calls.
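
For illustration, the two separate calls boil down to something like the following sketch. I am assuming the usual dyndns2-style request with basic auth here; the exact path and parameter names used by the backend and accepted by desec may differ:

import requests

DOMAIN = 'your-domain.dedyn.io'   # your desec hostname
TOKEN = 'your-desec-token'

# one call per address family: update.dedyn.io sets the A record,
# update6.dedyn.io the AAAA record
for endpoint, address in (
    ('https://update.dedyn.io/nic/update', '192.0.2.10'),
    ('https://update6.dedyn.io/nic/update', '2001:db8::1'),
):
    r = requests.get(
        endpoint,
        params={'hostname': DOMAIN, 'myip': address},
        auth=(DOMAIN, TOKEN),   # dyndns2-style: domain as user, token as password
        timeout=30,
    )
    print(endpoint, r.status_code, r.text)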

I reverted to the devel snapshot, so far all fine (I just cannot force a reconnect right now as some video calls are going on, but I will do it asap and then give feedback).

Br br
#23
Hi,

In preparation for 23.7 and the migration from legacy dyndns to ddclient, I experimented a bit today with both ddclient backends (ddclient and the new OPNsense one) and the dyndns2 protocol. I am with desec, and I got it up and running with the ddclient backend and the config as described here:

https://forum.opnsense.org/index.php?topic=26446.msg134975#msg134975

Basically it works; however, every second update cycle an update is reported as performed successfully which, according to the desec DNS logs, does not actually take place. The ddclient logs look like this:

<29>1 2023-06-08T00:53:49+02:00 OPNsense.zuhause.xx ddclient[61106] 34054 - [meta sequenceId="3"] WARNING:  Wait at least 5 minutes between update attempts.
<29>1 2023-06-08T00:58:49+02:00 OPNsense.zuhause.xx ddclient[61106] 29212 - [meta sequenceId="1"] SUCCESS:  updating crandale.dedyn.io: good: IP address set to 87.XXX.XXX.140
<29>1 2023-06-08T01:03:49+02:00 OPNsense.zuhause.xx ddclient[61106] 50446 - [meta sequenceId="1"] WARNING:  skipping update of crandale.dedyn.io from <nothing> to 87.XXX.XXX.140.
<29>1 2023-06-08T01:03:49+02:00 OPNsense.zuhause.xx ddclient[61106] 50446 - [meta sequenceId="2"] WARNING:  last updated Thu Jun  8 00:58:49 2023 but last attempt on Thu Jun  8 00:58:49 2023 failed.

I could not yet find out why a SUCCESS for an update is noted in the logs that desec does not confirm.

I then tried the new Python OPNsense backend of ddclient, and the result looks very encouraging:

I simply added two new lines to /usr/local/opnsense/scripts/ddclient/lib/account/dyndns2.py (lines 37/38):


     35     _services = {
     36         'dyndns2': 'members.dyndns.org',
     37         'desec(v4)': 'update.dedyn.io',
     38         'desec(v6)': 'update6.dedyn.io',
     39         'dns-o-matic': 'updates.dnsomatic.com',


The configuration for desec and the OPNsense backend then looks like this:

- Services: Dynamic DNS: Settings: General Settings
Enabled [X]
Verbose [X]
Allow Ipv6 [X]
Interval [300]
Backend [OPNsense]

I added 2 services under the same desec account:

- Services: Dynamic DNS: Settings: Edit Account
Enabled [X]
Service [desec (v6)]
Protocol  [DynDNS2]
Username [Your Domain]
Password [Your DeSec Token]
Hostname(s) [Your Domain]
Check ip method [Interface [IPv6]]
Force SSL [X]
Interface to monitor [Your WAN Interface]

- Services: Dynamic DNS: Settings: Edit Account
Enabled [X]
Service [desec (v4)]
Protocol  [DynDNS2]
Username [Your Domain]
Password [Your DeSec Token]
Hostname(s) [Your Domain]
Check ip method [Interface [IPv4]]
Force SSL [X]
Interface to monitor [Your WAN Interface]

After activating, the ddclient logs look like this:

<165>1 2023-06-08T16:45:53+02:00 OPNsense.zuhause.xx ddclient 60835 - [meta sequenceId="4"] Account yyyyyyyyyy-18d2-47a7-b45a-4468975dc2e7 [desecv6 - dedyn]  set new ip 2003:XXXX:XXXX:XXXX:XXXX:efff:fe57:21ce [good]
<165>1 2023-06-08T16:45:53+02:00 OPNsense.zuhause.xx ddclient 60835 - [meta sequenceId="5"] Account yyyyyyyyy-18d2-47a7-b45a-4468975dc2e7 [desecv6 - dedyn]  changed
<165>1 2023-06-08T16:45:53+02:00 OPNsense.zuhause.xx ddclient 60835 - [meta sequenceId="6"] Account zzzzzzzzzz-f19d-4b4e-98a8-1bf71b62ee24 [desecv4 - dedyn]  execute
<163>1 2023-06-08T16:45:59+02:00 OPNsense.zuhause.xx ddclient 60835 - [meta sequenceId="7"] Account zzzzzzzzzz-f19d-4b4e-98a8-1bf71b62ee24 [desecv4 - dedyn]  failed to set new ip 87.XXX.XXX.236 [429 -
Request was throttled. Expected available in 55 seconds.]


After the mentioned 55 seconds, the IPv4 address is also visible at desec as an A record.

This means desec is basically working on the new OPNsense backend for IPv4 AND IPv6 with some very simple and straightforward extensions to the dyndns2.py code; the only oddity is the throttling of the sequential requests to the same desec account for v4 and v6, which apparently allows only one update per minute. Perhaps an additional throttling config item could be added to the new OPNsense backend code.
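
Just as a thought, such throttling handling could be as simple as the following sketch (a hypothetical helper, not part of the current backend): parse the announced waiting time from a 429 response and retry once instead of failing the update outright.

import re
import time

import requests


def update_with_backoff(url, params, auth, max_wait=120):
    """Retry a dyndns update once if the provider throttles it (HTTP 429)."""
    response = requests.get(url, params=params, auth=auth, timeout=30)
    if response.status_code == 429:
        # desec answers e.g. "Request was throttled. Expected available in 55 seconds."
        match = re.search(r'in (\d+) seconds', response.text)
        wait = min(int(match.group(1)) if match else 60, max_wait)
        time.sleep(wait + 1)
        response = requests.get(url, params=params, auth=auth, timeout=30)
    return response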

Several reboots and reconnects, each leading to different IPv4 and IPv6 addresses, confirmed that it is working.

I think this example could potentially open a pretty fast integration path for some more dyndns2-based service providers into the new OPNsense backend Python code, and thereby at least partly catch up with the legacy dyndns solution as far as provider support is concerned. Admittedly, there are many non-dyndns2 providers for which more code needs to be written.

If this report is perceived positively, perhaps it could be taken into the mainstream code base, or you could let me know how I can contribute it.

Br br
#25
Hi all,

Not sure whether my issue is related, but this morning I found my OPNsense in a kind of blocked state. Last night at 3:40 am it began to start countless processes of


/usr/local/bin/php /usr/local/etc/rc.newwanipv6 pppoe0 force


By 10 am I had about 400 such processes running, with 100% CPU and 92% memory load. A reboot failed; only a hard power cycle brought the system back.

It also seems to be related to dhcp6c; every minute the system log contained several entries like these:

<13>1 2023-05-29T03:39:53+02:00 OPNsense.zuhause.xx dhcp6c 25660 - [meta sequenceId="1"] dhcp6c_script: REQUEST on pppoe0 executing
<13>1 2023-05-29T03:39:53+02:00 OPNsense.zuhause.xx dhcp6c 28830 - [meta sequenceId="2"] dhcp6c_script: REQUEST on pppoe0 renewal (REASON)
<13>1 2023-05-29T03:39:54+02:00 OPNsense.zuhause.xx opnsense 29301 - [meta sequenceId="3"] /usr/local/etc/rc.newwanipv6: IP renewal starting (new: 2003:XX:XXXX:XXXX:3eec:efff:fe57:21ce, old: 2003:XX:XXXX:XXXX:3eec:efff:fe57:21ce, interface: WAN[wan], device: pppoe0, force: yes)
<13>1 2023-05-29T03:39:54+02:00 OPNsense.zuhause.xx opnsense 29301 - [meta sequenceId="4"] /usr/local/etc/rc.newwanipv6: plugins_configure dhcp (,inet6)
<13>1 2023-05-29T03:39:54+02:00 OPNsense.zuhause.xx opnsense 29301 - [meta sequenceId="5"] /usr/local/etc/rc.newwanipv6: plugins_configure dhcp (execute task : dhcpd_dhcp_configure(,inet6))

The dhcpd logs show constant attempts to start dhcpd although it is already running.

A race condition might be the reason; however, this seems to be new since 23.1.8 (I have never seen it before).

Br br
#26
@Gromhelm That's interesting!

Exactly the other way round at my place:

Finally test-ipv6.com reports that I have ipv6 connectivity with 23.1.8 ... :o
Br br


#27
23.1 Legacy Series / Re: Problem with Airprint
April 03, 2023, 01:17:01 PM
Hmmmmm

what kind of compelling help do you expect to get for this kind of extremely detailed request ????

br br
#28
... as said in my 3rd paragraph - they are neither, AND they have far less functionality in almost all areas.

The decision has to be made depending on what your requirements are and what you weigh against what.

"Trouble free" is a big illusion - as simple as that

br br
#29
All,

Meanwhile I am getting somewhat puzzled about where this discussion is supposed to lead and what the expectations towards OPNsense are supposed to result in.

My view is:
Simply forget the idea that OPNsense can fix something which quite obviously originates from the network side. There are myriad reasons why reconnects are triggered from the network side, on several layers (DSL, PPPoE, ... up to problems in your in-house cabling), and there are myriad conceivable combinations in which this happens while (one of) the sides is not in a well-defined state, so there is no golden rule that fixes everything with a single finger tip. And even if we find ten-ish issues where some workarounds on the OPNsense side can work around certain network issues, myriad other reasons will remain.

Even for officially authorized router devices from Telekom, the fora are FULL of stories, reports, complaints and anger around this Zwangstrennung (forced disconnection) beast. Let's not chase a phantom on the OPNsense side; that does not help to get things improved.

The best proposal when experiencing such a thing has meanwhile been mentioned 100 times: reconnect from the client side once this situation occurs. Three layers to try:

1.) Reconnect PPPoE - if that fails,
2.) Reconnect DSL including PPPoE - if that fails again,
3.) Reboot OPNsense and the modem (which again includes a DSL and PPPoE reconnect, incl. IPv6)

As long as there was no physical line problem and no relevant device on the network side was out of service, this got me back to a stable IPv6 connection in 100% of the cases.

Just my 10 cents

Br br



#30
Hi there

@sbellon I have exactly the same setup as you, and the good old description in

https://forum.opnsense.org/index.php?topic=21839.0

still fits for 23.1. From my view it is close to the "definitive" way you seem to be looking for.

However, "Zwangstrennung" due to the many different reasons it is carried out from network side is a somewhat different animal which is hard to be addressed from client side in all its (often unknown and intransparent) cases. Zwangstrennung is done for maintenance, new customers on the same line bundle and after-optimization, incident resolution, construction work, and many more. Btw. also all my fritzboxes lost ipv6 after Zwangstrennung pretty often.

I can confirm that in several old versions of OPNsense I also saw that IPv6 did not come up properly after a Zwangstrennung. I could then recover IPv6 by manually saving the WAN config again, and IPv6 was back. Other than that, playing around with the settings on the client side did not bring any improvement for the Zwangstrennung case.

I also never got the same IPv6 prefix after a Zwangstrennung that I had before - it was always different.

Br br