Messages - RobLatour

#1
Thank you for your time and insights.  This is well over my pay grade, so I really do appreciate your help.  I'll poke around a little more on this and may open it on GitHub as a potential bug.  However, for now I've got my program up and running; I just had to assign a static IP address to the device running it.

The thing is, it took several days until I stumbled on this workaround.  However, perhaps someone else reading this thread in the future will be able to save some time because of it.

Again, with thanks for your time and insights!
#2
Well, I changed my project's code to reference only one NTP server (as shown below)

configTime(GMT_OFFSET_SEC, DAY_LIGHT_OFFSET_SEC, NTP_SERVER1);

and pointed it directly at the IP address of my ESP32 time server (the one used by OPNSense).
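
For reference, the relevant bit now boils down to something like this (the offsets and server IP below are placeholders, not my actual values):

// Placeholders - substitute your own offsets and the IP of your time server
#define GMT_OFFSET_SEC       (-5 * 3600)     // e.g. UTC-5
#define DAY_LIGHT_OFFSET_SEC 3600            // 1-hour DST offset
#define NTP_SERVER1          "192.168.2.2"   // IP of the ESP32/GPS time server (example only)

void configureNtp() {
  // Only one server is passed; the two optional extra server slots are left unused
  configTime(GMT_OFFSET_SEC, DAY_LIGHT_OFFSET_SEC, NTP_SERVER1);
}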

However, I got the same results:

If the client is assigned a dynamic IP address, it does not get the time from the time server.

If the client is assigned a static IP address, it does get the time from the time server.
#3
Well of course you are right! :-)

I had been thinking of my NTP server as the Network Time Service running on OPNSense - i.e. the IP address of the OPNSense box itself - which, as I understand it, intercepts calls to, for example, pool.ntp.org and responds to the device itself.

#4
Hi,

Thanks for your comments.

Regarding: "Obviously, you set the NTP servers in your code, so your client does not use DHCP assignments."

That's not exactly how it works.  If you look at the example from my earlier post here: https://wokwi.com/projects/420011361310192641 you will notice that all that is done is a Wi-Fi connection.  The NTP request travels over UDP, and I assume it is generally broadcast and that is how the NTP server picks up on the request.  I also assume that the NTP server gets the source IP address from the UDP packet, and that is how it knows which device it needs to respond to (if it doesn't itself also do a general broadcast).  In any case, I am no expert on how NTP servers work, but that is my educated guess, as the client doesn't configure the NTP server's address.
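
For what it's worth, here is a rough sketch of a basic SNTP request done by hand with WiFiUDP, just to show the sort of UDP traffic involved (the server IP is a placeholder; this isn't what my project actually does - it relies on configTime):

#include <WiFi.h>
#include <WiFiUdp.h>

const char* NTP_SERVER_IP = "192.168.2.2";   // placeholder - not my real time server
const uint16_t NTP_PORT = 123;               // NTP uses UDP port 123

WiFiUDP udp;

// Send a single SNTP request and return Unix time, or 0 on timeout.
time_t requestNtpTime() {
  uint8_t pkt[48] = {0};
  pkt[0] = 0b11100011;                       // LI=3 (unsynchronized), version 4, mode 3 (client)

  udp.begin(2390);                           // arbitrary local source port
  udp.beginPacket(NTP_SERVER_IP, NTP_PORT);
  udp.write(pkt, sizeof(pkt));
  udp.endPacket();                           // the request goes out as a UDP datagram

  // The server answers the source IP/port taken from the request packet.
  uint32_t start = millis();
  while (millis() - start < 2000) {
    if (udp.parsePacket() >= 48) {
      udp.read(pkt, 48);
      // Seconds since 1900 are in bytes 40..43 of the reply (transmit timestamp).
      uint32_t secsSince1900 = ((uint32_t)pkt[40] << 24) | ((uint32_t)pkt[41] << 16) |
                               ((uint32_t)pkt[42] << 8)  |  (uint32_t)pkt[43];
      return (time_t)(secsSince1900 - 2208988800UL);   // 1900 -> 1970 epoch offset
    }
    delay(10);
  }
  return 0;                                  // no reply within 2 seconds
}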

Again, this works fine when the client is connected with a static address, but not with a dynamic one.
#5
Thanks for your comments.

Quote: How do you configure your NTP client in each setup?

There are different ways to get NTP data on an ESP32 device, but under the hood I suspect they all boil down to the same thing.
Here is some stub code I wrote for an emulator to see if I had the basics right (which I did):
https://wokwi.com/projects/420011361310192641
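
In case the link ever goes away, the stub is essentially along these lines (the SSID, password, and NTP server below are placeholders, not copied from the wokwi project):

#include <WiFi.h>
#include <time.h>

// Placeholders - not the actual credentials or server used in the stub
const char* WIFI_SSID      = "my-ssid";
const char* WIFI_PASSWORD  = "my-password";
const char* NTP_SERVER     = "pool.ntp.org";   // or a LAN time server's address
const long  GMT_OFFSET_SEC = 0;
const int   DST_OFFSET_SEC = 0;

void setup() {
  Serial.begin(115200);

  // Connect to Wi-Fi
  WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
  while (WiFi.status() != WL_CONNECTED) {
    delay(250);
    Serial.print(".");
  }
  Serial.println("\nWiFi connected");

  // Ask the ESP32's built-in SNTP client to fetch the time
  configTime(GMT_OFFSET_SEC, DST_OFFSET_SEC, NTP_SERVER);
}

void loop() {
  struct tm timeinfo;
  if (getLocalTime(&timeinfo)) {                      // true once the time has been synced
    Serial.println(&timeinfo, "%Y-%m-%d %H:%M:%S");
  } else {
    Serial.println("waiting for NTP...");
  }
  delay(5000);
}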

Quote: You can easily check with another client and/or by dumping DHCP requests and answers.
In my testing I set up Wireshark on another computer on the same interface, and then had it monitor for NTP requests.
Yesterday, when the ESP32 device had a dynamically assigned address, I could not see the NTP request via Wireshark.
Also, yesterday, when the ESP32 device had a statically assigned address, I thought I could see the NTP request via Wireshark, but I just checked again and I didn't see it.

Other than using Wireshark, is there a way in OPNSense to dump DHCP requests and answers?
#6
I don't know if this is an OPNSense bug or not, so I thought I would post here and get comments before reporting it as a bug (if appropriate) on GitHub.


In short: NTP is not working for devices on my Wi-Fi mesh network that are assigned dynamic addresses; however, it does work if I assign them a static address.

Question: is this an OPNSense bug or just the way things should work?


Additional background and detail:

I am running OPNSense (current community version) with various interfaces, including:
   LAN
   LAN_IOT
   Master_Clock

I have two separate physical Wi-Fi networks.  One is attached to LAN, the other to LAN_IOT.
Both the Wi-Fi LAN and LAN_IOT are running in Access Point mode.
The Wi-Fi LAN device is a TP-Link Deco mesh system (with three TP-Link Deco units connected in the mesh).
The Wi-Fi LAN_IOT device is a legacy TP-Link access point which also connects to a legacy TP-Link extender.

OPNSense is running the Network Time Service.

The Network Time Service runs on the Master_Clock interface.

The Network Time Service gets its time from an ESP32/GPS-based Stratum 1 time server I developed and released in 2023.

The Network Time Service delivers NTP date/time data to all other interfaces.

This setup has been working very well for me since 2023; however, almost all my devices, across all interfaces, have been set up via OPNSense with static IP addresses.

Recently I have been programming another set of ESP32 projects and noticed that while they could connect to either the LAN Wi-Fi network or the LAN_IOT Wi-Fi network, they would not retrieve the NTP date/time data.

I spent several days trying to resolve this, but most of that time was spent chasing down the rabbit hole of bad ESP32 code / bad board configurations / etc. (none of which turned out to be the issue in the end).

Also, while debugging this, at one point I stopped the OPNSense Network Time Service; with it stopped, my other devices could get NTP date/time data but my ESP32 projects still could not.  So that was not it.

However, late yesterday I found that if, in OPNSense, I set the ESP32 project device up with a static IP address, it was able to get the NTP date/time data just fine.  When it ran with a dynamic IP address, the problem returned.

In conclusion:

In any case, I am reporting it here. 

I don't know if it's an issue with OPNSense, TP-Link, both, or simply how things work.

If it is an as-yet unknown/unreported issue with OPNSense, the above offers a workaround.

Also, if the comments here so indicate, I will open it up as a bug on the OPNSense GitHub page.

With thanks.


#7
Quote: You won't have been locked out. You disabled the DNS resolver, so whatever URL you were using to access the web GUI could not be resolved.

You should have been able to access the web GUI using the firewall IP address to enable Unbound again.

Well I very much suspect you are right:

I was trying to access it via:

https://192.168.1.1

when I should have been using:

https://192.168.1.1:8443
#8
On my OPNSense system I use Unbound and Caddy.

With the help of Caddy, I access the OPNSense WebGUI from a PC on my LAN interface via https.

I was trying to get something else working today, and disabled Unbound.  Immediately upon doing that I was locked out of the OPNSense WebGUI.

I had to connect a keyboard and monitor to my OPNSense box and do a restore from a backup earlier in the day (when Unbound was enabled) to get WebGUI access back again.

Is this a known problem?
#9
OK, I finally got SSL access to Home Assistant via my own domain name, Cloudflare, and the OPNSense Caddy plugin.  Here is how:

1. Set up my domain DNS, Cloudflare, and Caddy in the same way as in my previous post (directly above) for ha.example.com

2. Created and installed a self-signed SSL certificate as detailed in this video:
    https://www.youtube.com/watch?v=d-CbVVxAHtI
    (on the Home Assistant box and on the local machine I wanted to use to access Home Assistant)
    (note: without the certificate I could still access portions of the Home Assistant screens; however, some key
    features, like changing Home Assistant settings, were blocked)

3. Added the following to Home Assistant's configuration.yaml file:

http:
  ssl_certificate: /config/homeassistant.pem    # self-signed certificate from step 2
  ssl_key: /config/homeassistant-key.pem        # its matching private key
  server_port: 8123                             # Home Assistant's default port
  use_x_forwarded_for: true                     # accept the client IP forwarded by the reverse proxy
  trusted_proxies:
    - 192.168.1.1        # the OPNSense box running the Caddy plugin
    - 172.30.33.0/24     # Home Assistant's internal add-on network (as I understand it)
    - ::1
    - 127.0.0.0



After that I could access Home Assistant with https://ha.example.com/lovelace/default_view

Thanks to all for their help!

#10
I've been using the OPNSense plugin for Caddy for a little while now.  I'm still working out some of the kinks on my system, but today I ran into a new one.

I had a device referenced by https://pikvm.example.com which had been working well for at least a week now, maybe two.

However, this morning I could no longer access it via https://pikvm.example.com as I could as recently as yesterday.  I could, however, still access it via http://xxx.xxx.xxx.xxx (its IPv4 address), and I could ping it just fine at that address.

After some digging, I realized I did not have the device assigned a static IPv4 address in OPNSense on the Services: ISC DHCPv4: [LAN] page.

I assigned a static IP address to it, and voilà, it was working again.

Just thought I would share.
#11
Yes, that's all pretty much the way I figured it - just thought I'd ask in case there was something OPNSense could do - but obviously not.

#13
In OPNSense, on my LAN interface I have:

    IPv4 Configuration Type set to Static IPv4
    IPv6 Configuration Type set to None

However, from Windows, if I ping one of my devices on the LAN from another device on the LAN using the following command:

ping venus.local

I get an IPv6 address.

However, if I ping the same device with

ping -4 venus.local

I get the expected IPv4 address.

I found this out the hard way, after something broke and it took me a couple of hours to find that the issue was this behaviour (i.e. venus.local resolving to an IPv6 address rather than an IPv4 address).

I'm not sure, but I suspect the IPv6 address is being generated by a switch within the LAN interface.

Is there anything that can be done in OPNSense to prevent this?

(I suspect no, but thought I would ask all the same).








#14
Thank you.

Well, I finally got it working using a domain and Cloudflare for machines running OPNSense itself, OpenMediaVault, PiKVM, and Bitwarden.  I don't yet have it working for Home Assistant, but will keep working on that.

Also, of note: at one point I was digging quite deep trying to figure out why it wasn't working and noticed one of my certificates was issued by Bitdefender, which didn't look right, as I thought it should be LetsEncrypt or ZeroSSL.  I worked for about an hour trying to figure out how that was happening (and how I might stop it), but I do have Bitdefender running on my PC.  In any case, I then switched over to testing on a Raspberry Pi using Firefox, where it said I was using a LetsEncrypt certificate.  However, it also said the issuer country was "RU"??

On OPNSense I have geo-blocking for certain countries, including Russia.  I turned that off for a short time to see what would happen, and shortly after that I finally got my first caddy redirect to work. 

I made some more config changes and somehow I think I ended up regenerating new LetsEncrypt certificates, as when I checked them later they had an issuer country of "US".  Now, given my prior experience with duckDSN/duckDNS, it is possible I key-fumbled something; however, at this point I wasn't testing with DuckDNS domains any more.  Also, I found this, which may tie into the problems I was having:

ref: https://community.sophos.com/utm-firewall/f/general-discussion/146144/let-s-encrypt-renewal-no-longer-works-with-country-blocking

In any case, at this point I have geo-blocking back on and, as mentioned above, Caddy is working as expected for most of my machines (just not Home Assistant).  Also, it is working fine when I am browsing on my PC with Bitdefender.  My PC is still showing the certificates as issued by Bitdefender, while my Pi shows them as issued by LetsEncrypt.  So I imagine Bitdefender is doing some sort of MITM work that I'm just not going to worry about for now.

If I figure out the Home Assistant thing, I'll post back here.

Again, as always, thanks for your help.



#15
Quote: well you wrote duckDSN instead of duckDNS

Well, @Monviech, when you are right you are right, and that does take care of the scary part - thank you!

However, sadly, the important although somewhat more mundane part remains:
https://ha-example.duckdns.org
https://ha-example.duckdns.org:8123
https://ha-example.org
all result in

This page isn't working
ha-example.duckdns.org is currently unable to handle this request.
HTTP ERROR 502

Here is the log (debug on):




2024-09-27T16:36:36-04:00 Error caddy "error","ts":"2024-09-27T20:36:36Z","logger":"http.log.error","msg":"tls: first record does not look like a TLS handshake","request":{"remote_ip":"192.168.1.10","remote_port":"11483","client_ip":"192.168.1.10","proto":"HTTP/2.0","method":"GET","host":"ha-example.duckdns.org","uri":"/","headers":{"Sec-Ch-Ua":["\"Google Chrome\";v=\"129\", \"Not=A?Brand\";v=\"8\", \"Chromium\";v=\"129\""],"Sec-Ch-Ua-Platform":["\"Windows\""],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Dest":["document"],"Sec-Fetch-Site":["none"],"Sec-Fetch-Mode":["navigate"],"Accept-Language":["en-GB,en-US;q=0.9,en;q=0.8"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"],"Priority":["u=0, i"],"Sec-Ch-Ua-Mobile":["?0"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"],"Sec-Fetch-User":["?1"],"Accept-Encoding":["gzip, deflate, br, zstd"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"ha-example.duckdns.org"}},"duration":0.002232754,"status":502,"err_id":"ycczhrkwr","err_trace":"reverseproxy.statusError (reverseproxy.go:1269)"}

2024-09-27T16:36:36-04:00 Error caddy "debug","ts":"2024-09-27T20:36:36Z","logger":"http.handlers.reverse_proxy","msg":"upstream
roundtrip","upstream":"192.168.1.173:8123","duration":0.002111749,"request":{"remote_ip":"192.168.1.10","remote_port":"11483","client_ip":"192.168.1.10","proto":"HTTP/2.0","method":"GET","host":"ha-example.duckdns.org","uri":"/","headers":{"Sec-Fetch-Site":["none"],"Sec-Fetch-Mode":["navigate"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Sec-Ch-Ua-Platform":["\"Windows\""],"Accept-Language":["en-GB,en-US;q=0.9,en;q=0.8"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"],"Sec-Fetch-User":["?1"],"Sec-Ch-Ua":["\"Google Chrome\";v=\"129\", \"Not=A?Brand\";v=\"8\", \"Chromium\";v=\"129\""],"Sec-Fetch-Dest":["document"],"X-Forwarded-For":["192.168.1.10"],"X-Forwarded-Host":["ha-example.duckdns.org"],"X-Forwarded-Proto":["https"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"],"Priority":["u=0, i"],"Sec-Ch-Ua-Mobile":["?0"],"Upgrade-Insecure-Requests":["1"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"ha-example.duckdns.org"}},"error":"tls: first record does not look like a TLS handshake"}

2024-09-27T16:36:36-04:00 Debug caddy "debug","ts":"2024-09-27T20:36:36Z","logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"192.168.1.173:8123","total_upstreams":1}

2024-09-27T16:36:34-04:00 Error caddy "error","ts":"2024-09-27T20:36:34Z","logger":"http.log.error","msg":"tls: first record does not look like a TLS handshake","request":{"remote_ip":"192.168.1.10","remote_port":"11483","client_ip":"192.168.1.10","proto":"HTTP/2.0","method":"GET","host":"ha-example.duckdns.org","uri":"/","headers":{"Sec-Ch-Ua-Platform":["\"Windows\""],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"],"Sec-Fetch-User":["?1"],"Priority":["u=0, i"],"Sec-Ch-Ua-Mobile":["?0"],"Sec-Purpose":["prefetch;prerender"],"Purpose":["prefetch"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"],"Sec-Fetch-Mode":["navigate"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Dest":["document"],"Sec-Fetch-Site":["none"],"Accept-Language":["en-GB,en-US;q=0.9,en;q=0.8"],"Sec-Ch-Ua":["\"Google Chrome\";v=\"129\", \"Not=A?Brand\";v=\"8\", \"Chromium\";v=\"129\""]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"ha-example.duckdns.org"}},"duration":0.002067182,"status":502,"err_id":"rq8r5ftj6","err_trace":"reverseproxy.statusError (reverseproxy.go:1269)"}

2024-09-27T16:36:34-04:00 Error caddy "debug","ts":"2024-09-27T20:36:34Z","logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"192.168.1.173:8123","duration":0.001940032,"request":{"remote_ip":"192.168.1.10","remote_port":"11483","client_ip":"192.168.1.10","proto":"HTTP/2.0","method":"GET","host":"ha-example.duckdns.org","uri":"/","headers":{"Sec-Purpose":["prefetch;prerender"],"X-Forwarded-Proto":["https"],"Sec-Ch-Ua-Mobile":["?0"],"Sec-Ch-Ua-Platform":["\"Windows\""],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"],"X-Forwarded-Host":["ha-example.duckdns.org"],"Sec-Fetch-Dest":["document"],"Sec-Fetch-Site":["none"],"Accept-Language":["en-GB,en-US;q=0.9,en;q=0.8"],"X-Forwarded-For":["192.168.1.10"],"Sec-Ch-Ua":["\"Google Chrome\";v=\"129\", \"Not=A?Brand\";v=\"8\", \"Chromium\";v=\"129\""],"Sec-Fetch-User":["?1"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Upgrade-Insecure-Requests":["1"],"Purpose":["prefetch"],"Priority":["u=0, i"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"],"Sec-Fetch-Mode":["navigate"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"ha-example.duckdns.org"}},"error":"tls: first record does not look like a TLS handshake"}

2024-09-27T16:36:34-04:00 Debug caddy "debug","ts":"2024-09-27T20:36:34Z","logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"192.168.1.173:8123","total_upstreams":1}


192.168.1.1 correctly shows as the OPNSense box
192.168.1.10 correctly shows as the machine on which the request is made
192.168.1.173 correctly shows as the Home Assistant box

Regarding "tls: first record does not look like a TLS handshake": I've tried it with and without the option 'TLS Insecure Skip Verify' checked.

On 192.168.1.10, if I ping ha-example.duckdns.org the ping works, and it correctly identifies my network's external IPv4 address.