Messages - jenswe

#1
Thanks for your reply.

But I do not think this is the issue, since I have verified that it works with another router.

I did some packet captures in Wireshark, and what I could see is that from within the pod the client offers only TLS 1.0, but if I run the same test from the host it offers TLS 1.3.
It is all so weird.
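For anyone wanting to reproduce that comparison without Wireshark, openssl can report the negotiated protocol from both vantage points. This is only a sketch: the pod name and the alpine/openssl image below are examples, not something from my setup; substitute whatever debug image you have handy.

```shell
# On the host: -brief (OpenSSL >= 1.1.0) prints a one-line summary
# including "Protocol version: ..." for the negotiated handshake.
openssl s_client -brief -connect github.com:443 </dev/null

# Inside the cluster: run a throwaway pod with an openssl image
# (alpine/openssl is just one example) and repeat the same check.
kubectl run tls-check --rm -it --restart=Never \
  --image=alpine/openssl -- s_client -brief -connect github.com:443
```

If the host reports TLSv1.3 and the pod's handshake dies with the same alert curl shows, that confirms the difference is in the pod's network path rather than in the TLS client itself.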

I found another guy on Reddit with the exact same issue:

https://www.reddit.com/r/kubernetes/comments/1c6qab6/tls_handshake_failure_in_kubernetes_pod/?rdt=36094

He is also using OPNsense.

#2
Hi!

My first post here. :)

I have been using OPNsense for 2 years now and I am very happy so far.

But now I have a problem I cannot fix by myself, so I hope the community can help me.

I have OPNsense set up with 5 VLANs. One of them I call Lab, and in this VLAN I want to run a k3s (Kubernetes) cluster. But I am running into some very strange behavior.

When I try to reach, for example, github.com from inside a pod in the k3s cluster, I get:


curl -v https://github.com
* Host github.com:443 was resolved.
* IPv6: 2a06:98c1:3120::1, 2a06:98c1:3121::1
* IPv4: 188.114.96.1, 188.114.97.1
*   Trying [2a06:98c1:3120::1]:443...
* Immediate connect fail for 2a06:98c1:3120::1: Network unreachable
*   Trying [2a06:98c1:3121::1]:443...
* Immediate connect fail for 2a06:98c1:3121::1: Network unreachable
*   Trying 188.114.96.1:443...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /cacert.pem
*  CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS alert, handshake failure (552):
* OpenSSL/3.3.2: error:0A000410:SSL routines::ssl/tls alert handshake failure
* closing connection #0
curl: (35) OpenSSL/3.3.2: error:0A000410:SSL routines::ssl/tls alert handshake failure
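One way to narrow down a failure like the one above is to pin curl to a single TLS version at a time (the --tlsv1.x and --tls-max options exist in curl 7.54 and later); run these from inside the pod:

```shell
# Force a TLS 1.3-only handshake: this should succeed on a healthy path.
curl -sv --tlsv1.3 --tls-max 1.3 -o /dev/null https://github.com

# Force a TLS 1.0-only handshake: modern servers such as github.com
# refuse this, so a handshake alert here would match a client that is
# stuck offering 1.0.
curl -sv --tlsv1.0 --tls-max 1.0 -o /dev/null https://github.com
```

If the 1.3-pinned run fails from the pod but succeeds from the host, the pod's ClientHello is being altered or mishandled somewhere on the path.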


So my first thought was that something was wrong with k3s. But after a lot of web searching I could not find any similar problems. Then I thought it might be a problem with the MTU settings, but changing it to 1450 on both the OPNsense VLAN and the NIC of the Linux Kubernetes machine (and also setting flannel to use 1450) did not solve the problem.
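For reference, a quick path-MTU probe can confirm or rule out fragmentation before touching any settings. This assumes Linux ping syntax, and github.com is just a convenient target:

```shell
# With an interface MTU of 1450, the largest unfragmented ICMPv4 payload is
# 1450 - 20 (IPv4 header) - 8 (ICMP header) = 1422 bytes.
MTU=1450
PAYLOAD=$((MTU - 28))
echo "probing path MTU with ${PAYLOAD}-byte payload"

# -M do sets the Don't Fragment bit: an oversized packet then reports
# "Message too long" instead of being silently fragmented en route.
ping -M do -s "$PAYLOAD" -c 3 github.com
```

If a DF-bit ping of that size gets through from the pod's node, MTU is unlikely to be what is breaking the TLS handshake.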

So, as a test, I connected an old Asus router I had lying around and, boom, it worked. Egress from within the pod worked perfectly.

So I guess my question is: what setting have I missed on my OPNsense router? Has anyone ever run into this issue?

Super happy to get some help!

Thx!