Messages - eakteam

#1
Hi everyone. I am struggling to set up a working IPv6 network for VMs connected to the OPNsense LAN interface.
The provider (Hetzner) has given me a /64 IPv6 network: 2a01:4f8:****:****::/64
I am able to get a working IPv6 setup for Proxmox itself and for OPNsense with the configuration below:
Proxmox (Debian) /etc/network/interfaces:

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp0s31f6
iface enp0s31f6 inet static
        address 94.130.***.***/26
        gateway 94.130.***.***
        up route add -net 94.130.***.*** netmask 255.255.255.192 gw 94.130.***.*** dev enp0s31f6
       # route 94.130.***.***/26 via 94.130.***.***

iface enp0s31f6 inet6 static
        address 2a01:4f8:****:****:aaaa::11/128
        gateway fe80::1

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o enp0s31f6 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o enp0s31f6 -j MASQUERADE
        post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1

iface vmbr0 inet6 static
        address 2a01:4f8:****:****:aaaa::1336/127
        up ip route add 2a01:4f8:****:****::/64 via 2a01:4f8:****:****:abcd::1337

auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0

iface vmbr1 inet6 manual
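
For completeness, the routed /64 can be sanity-checked on the Proxmox host with something like the following (a rough sketch only; addresses masked as above, and note that /proc/sys/net/ipv4/ip_forward covers IPv4 only, IPv6 forwarding is a separate sysctl):

# IPv6 forwarding must be enabled for the host to route the /64 towards OPNsense
sysctl net.ipv6.conf.all.forwarding

# Confirm the route for the /64 towards the OPNsense side is installed
ip -6 route show | grep 2a01:4f8

# Ask the kernel which path it would pick for an address inside the /64
ip -6 route get 2a01:4f8:****:****:172:16:0:2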


OK, now the hard part:

Ping from Proxmox to OPNSense works OK
Ping from OPNSense to Proxmox works OK
Ping from Proxmox to google: ping6 google.com is OK
Ping from OPNSense to google: ping6 google.com is OK
Ping from VM to OPNSense (2a01:4f8:****:****:aaaa::1337) works OK
Ping from VM to Gateway (2a01:4f8:****:****:aaaa::1336) FAILS
Ping from VMs to Google (ping6 google.com) FAILS
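
To narrow down where it stops, one option is to watch ICMPv6 on the Proxmox bridges while the VM pings the gateway (a rough sketch; bridge names as in the config above):

# On the Proxmox host, in two shells, while the VM runs: ping6 2a01:4f8:****:****:aaaa::1336
tcpdump -ni vmbr0 icmp6
tcpdump -ni vmbr1 icmp6

# Check whether a neighbor entry for the VM or the OPNsense address ever appears
ip -6 neigh show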

In OPNsense, under IPv6 Configuration Type (LAN), I chose Static IPv6 with the following value: 2a01:4f8:****:****:0172:0016:0:0001/125
In OPNsense, under ISC DHCPv6: [LAN], I enabled the service and set the range from 2a01:4f8:****:****:172:16:0:1 to 2a01:4f8:****:****:172:16:0:7
In OPNsense, under Router Advertisements: [LAN], I chose Managed and set the DNS servers as follows:
2001:4860:4860::8888
2001:4860:4860::4444

When booting a VM (in my case 1 Windows and 1 Ubuntu), it gets an IPv6 address from the OPNsense DHCPv6 server but cannot reach or resolve any IPv6 addresses.
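
For reference, this is roughly what can be checked on the VM side (a sketch; ens18 is just an example interface name on the Ubuntu VM):

ip -6 addr show dev ens18                    # address handed out by DHCPv6
ip -6 route show                             # default route should come from the RA
ping6 -c 3 2a01:4f8:****:****:172:16:0:1     # the OPNsense LAN address
cat /etc/resolv.conf                         # DNS servers pushed by DHCPv6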

The ping output looks like this:

ping6 google.com
PING google.com(fra24s05-in-x0e.1e100.net (2a00:1450:4001:828::200e)) 56 data bytes
--- google.com ping statistics ---
33 packets transmitted, 0 received, 100% packet loss, time 32747ms


What am I missing? I have been trying different configurations for days and I'm really getting tired, but nothing works. If anybody can assist, I'd really appreciate it a lot.
#2
General Discussion / Can't access HAProxy stats page
March 11, 2024, 11:33:12 AM
Hi, I'm struggling to access the HAProxy stats page and can't make it work.

This is the config for it:

#
# Automatically generated configuration.
# Do not edit this file manually.
#

global
    uid                         80
    gid                         80
    chroot                      /var/haproxy
    daemon
    stats                       socket /var/run/haproxy.socket group proxy mode 775 level admin
    nbthread                    2
    hard-stop-after             60s
    no strict-limits
    maxconn                     10000
    httpclient.resolvers.prefer   ipv4
    tune.ssl.default-dh-param   2048
    spread-checks               2
    tune.bufsize                16384
    tune.lua.maxmem             0
    log                         /var/run/log local0 info
    lua-prepend-path            /tmp/haproxy/lua/?.lua
cache opnsense-haproxy-cache
    total-max-size 4
    max-age 60
    process-vary off

defaults
    log     global
    option redispatch -1
    maxconn 10000
    timeout client 30s
    timeout connect 30s
    timeout server 30s
    retries 3
    default-server init-addr last,libc
    default-server maxconn 10000

# autogenerated entries for ACLs


# autogenerated entries for config in backends/frontends

# autogenerated entries for stats




# Frontend: HTTP (HTTP Frontend)
frontend HTTP
    bind *:80 name *:80
    mode http
    option http-keep-alive

    # logging options
    # ACL: Mail_Subdomains
    acl acl_65eec1ca389293.65127883 hdr_beg(host) -i mail.
    # ACL: Server_Subdomain
    acl acl_65eec21f741e58.90038437 hdr_beg(host) -i server.

    # ACTION: Redirect_Mail_Server
    use_backend Mailcow if acl_65eec1ca389293.65127883 || acl_65eec21f741e58.90038437

# Backend: Mailcow (Mailcow Mail Server)
backend Mailcow
    # health checking is DISABLED
    mode http
    balance source
    # stickiness
    stick-table type ip size 50k expire 30m 
    stick on src
    http-reuse safe
    option forwarded
    option forwardfor
    server Mailcow 172.16.0.3:80

listen local_statistics
    bind            127.0.0.1:8822
    mode            http
    stats uri       /haproxy?stats
    stats realm     HAProxy\ statistics
    stats admin     if TRUE

listen  remote_statistics
    bind            172.16.0.1:8822
    mode            http
    stats uri       /haproxy?stats
    stats hide-version


But the stats page cannot be accessed from the LAN via the OPNsense LAN IP address.
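
A quick way to test whether HAProxy itself answers on that port (a rough sketch using curl, with the addresses from the listen blocks above):

# From the OPNsense shell: the local_statistics listener bound to loopback
curl -v "http://127.0.0.1:8822/haproxy?stats"

# From a LAN client: the remote_statistics listener
curl -v "http://172.16.0.1:8822/haproxy?stats"

If the loopback test works but the LAN one doesn't, a firewall rule on the OPNsense LAN interface allowing TCP 8822 is probably what is missing.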
#3
Hello everyone, I am new to HAProxy and have been struggling for more than 3 days to make it work, but unfortunately have achieved nothing.

So, in short, I am trying to achieve this kind of logic:

Dedicated Server (Proxmox VE + 1 Public IP) -> (NAT) OPNsense + HAProxy -> Other VMs connected to the OPNsense LAN interface.

> The configuration of the Proxmox server is as follows:

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto enp0s31f6
iface enp0s31f6 inet static
        address 94.130.x.x/26
        gateway 94.130.x.x

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o enp0s31f6 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o enp0s31f6 -j MASQUERADE
        post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1

auto vmbr1
iface vmbr1 inet static
        address 172.16.0.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0


OK, so I created a new VM (OPNsense) and installed and configured it as follows:

WAN -> vtnet0 (bridged to vmbr0 on the Proxmox server)
LAN -> vtnet1 (bridged to vmbr1 on the Proxmox server)

WAN configured with 10.10.10.10/24
LAN configured with 172.16.0.1/24 DHCP(yes) Range: 172.16.0.2-172.16.0.254

> Now for the servers:


  • VM 1

A VM (Ubuntu Server) running OpenLiteSpeed Web Server (example.com) and Postfix/Dovecot for email, connected to vmbr1 (the OPNsense LAN, which is connected to Proxmox via vtnet1).
The Ubuntu server gets its IP successfully via OPNsense as follows -> IP 172.16.0.2, Gateway 172.16.0.1.


  • VM2

A VM (Ubuntu Server) running OpenLiteSpeed Web Server (anotherexample.com) and Postfix/Dovecot for email, connected to vmbr1 (the OPNsense LAN, which is connected to Proxmox via vtnet1).
The Ubuntu server gets its IP successfully via OPNsense as follows -> IP 172.16.0.3, Gateway 172.16.0.1.

Both VMs are connected through the OPNsense LAN and can reach the public internet successfully.

OK, now the hard part :)

CloudFlare DNS for example.com:

An A record for example.com pointing to the public IP of the Proxmox server -> 94.130.x.x

I created some iptables rules to forward traffic from the public IP to the local OPNsense and HAProxy:

For OPNsense:
iptables -t nat -A PREROUTING -p tcp --dport 10443 -j DNAT --to-destination 10.10.10.10:10443
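
I assume ports 80/443 would need the same kind of DNAT towards the OPNsense WAN address so that the HAProxy frontend can be reached from outside; a sketch along the lines of the rule above:

iptables -t nat -A PREROUTING -p tcp --dport 80  -j DNAT --to-destination 10.10.10.10:80
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.10.10.10:443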

HAProxy configuration:


#
# Automatically generated configuration.
# Do not edit this file manually.
#

global
    uid                         80
    gid                         80
    chroot                      /var/haproxy
    daemon
    stats                       socket /var/run/haproxy.socket group proxy mode 775 level admin
    nbthread                    1
    hard-stop-after             60s
    no strict-limits
    tune.ssl.default-dh-param   2048
    spread-checks               2
    tune.bufsize                16384
    tune.lua.maxmem             0
    log                         /var/run/log local0 info
    lua-prepend-path            /tmp/haproxy/lua/?.lua

defaults
    log     global
    option redispatch -1
    timeout client 30s
    timeout connect 30s
    timeout server 30s
    retries 3
    default-server init-addr last,libc

# autogenerated entries for ACLs


# autogenerated entries for config in backends/frontends

# autogenerated entries for stats




# Frontend: Public_Facing_Pool ()
frontend Public_Facing_Pool
    bind *:443 name *:443  proto h2
    bind *:80 name *:80  proto h2
    mode http
    option http-keep-alive
    maxconn 500

    # logging options
    # ACL: Web-Server
    acl acl_65baf2832edf80.37086579 hdr_beg(host) -i example.com

    # ACL: Web-Server1
    acl acl_66baf2832edf80.37086579 hdr_beg(host) -i anotherexample.com

    # ACTION: Web-Server
    use_backend Web-Server if acl_65baf2832edf80.37086579

    # ACTION: Web-Server
    use_backend Web-Server1 if acl_66baf2832edf80.37086579

# Backend: Web-Server ()
backend Web-Server
    # health checking is DISABLED
    mode http
    balance roundrobin

    http-reuse safe
    server Web-Server 172.16.0.2:443

# Backend: Web-Server1 ()
backend Web-Server1
    # health checking is DISABLED
    mode http
    balance roundrobin

    http-reuse safe
    server Web-Server1 172.16.0.3:443

# Backend: acme_challenge_backend (Added by ACME Client plugin)
backend acme_challenge_backend
    # health checking is DISABLED
    mode http
    balance source
    # stickiness
    stick-table type ip size 50k expire 30m 
    stick on src
    http-reuse safe
    server acme_challenge_host 127.0.0.1:43580



# statistics are DISABLED



When trying to open example.com or anotherexample.com in a browser, they fail to load.
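
To rule out DNS or CloudFlare, HAProxy can also be tested directly from the Proxmox host (a rough sketch, assuming the frontend listens on the OPNsense WAN address 10.10.10.10):

# Does the frontend answer at all for the expected Host header?
curl -v -H 'Host: example.com' http://10.10.10.10/

# Same test against the public IP from outside, once the 80/443 DNAT rules are in place
curl -v -H 'Host: example.com' http://94.130.x.x/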

Can anybody please help me achieve this? It is very important to me and I don't know what else to try; I've been going around in circles on this for more than 3 days, hours at a time. I don't know if something is wrong with the configuration or if it's a gap in my knowledge.
#4
Hello everyone. I'm struggling to set up a working network for my cloud services.

The setup is as the following:

Dedicated Server (1 Public IP) -> Proxmox -> (NAT) OPNsense -> Other VMs connected to the LAN

In Proxmox I have the following configuration in /etc/network/interfaces:

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto enp0s31f6
iface enp0s31f6 inet static
        address 94.130.x.x/26
        gateway 94.130.x.x

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o enp0s31f6 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o enp0s31f6 -j MASQUERADE
        post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1

auto vmbr1
iface vmbr1 inet static
        address 172.16.0.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0


OK, so I created a new VM (OPNsense) and installed and configured it as follows:

WAN -> vtnet0 (bridged to vmbr0 on the Proxmox server)
LAN -> vtnet1 (bridged to vmbr1 on the Proxmox server)

WAN configured with 10.10.10.2/24
LAN configured with 172.16.0.1/24 DHCP(yes) Range: 172.16.0.2-172.16.0.254

After that I created another VM (Ubuntu) and connected it to vmbr1 (the OPNsense LAN, which is connected to Proxmox via vtnet1).
The client gets its IP successfully via OPNsense DHCP as follows -> IP 172.16.0.2, Gateway 172.16.0.1, DNS 172.16.0.1

But this client cannot access the internet or even the OPNsense GUI.

From the OPNsense shell I can ping the client IP 172.16.0.2, and I can also ping google.com and 8.8.8.8.
From the client I can ping 172.16.0.1 but not google.com or 8.8.8.8.
I also can't open the OPNsense GUI from the client via 172.16.0.1.
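
For reference, these are roughly the checks from the Ubuntu client side (a sketch; ens18 is just an example interface name):

ip addr show dev ens18      # lease from OPNsense: 172.16.0.2/24 expected
ip route show               # default via 172.16.0.1 expected
ping -c 3 172.16.0.1        # works
ping -c 3 8.8.8.8           # fails
ping -c 3 10.10.10.1        # does traffic make it past OPNsense to the Proxmox side?
dig @172.16.0.1 google.com  # is DNS via OPNsense answering at all?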

The output of cat /etc/resolv.conf from the OPNsense shell is as follows:

domain localdomain
nameserver 172.16.0.1
nameserver 10.10.10.1
search localdomain


What am I doing wrong? I've spent more than a day trying to figure it out but nothing has helped.