Messages - spark5

#1
Hi, no. I was testing with an OPNsense live ISO: I configured only the network interfaces, set an allow-all firewall rule and started iperf.
This is so strange ... I would like to try plain FreeBSD; that might give an answer, but it would be no solution.

Thanks for the help,

Ronny

Quote from: tverweij on October 12, 2023, 03:50:02 PM
Are you using IPS scanning?
#2
Hi all ... I searched a lot but did not find a solution or an explanation.
I hope somebody can help me.

We are currently running a couple of OPNsense clusters on Proxmox VE with some VLANs.
We found that our bandwidth through the firewall tops out near 3 Gbit/s (all interfaces are 10 Gbit/s).

So I started testing (all iperf default settings):
two Linux VMs, with the iperf traffic routed via OPNsense vs. a Linux router.

# Linux VM via Proxmox
## VLAN config in Proxmox, interface via VLAN tag
19-22 Gbit/s
## VLAN in VM, VLAN-aware bridge
12-13 Gbit/s

# OPNsense 22.7 VM via Proxmox
## VLAN config in Proxmox, interface via VLAN tag
4 Gbit/s
## VLAN in VM, VLAN-aware bridge
3 Gbit/s (2.8)

# OPNsense 22.7 bare metal without virtualisation
6 Gbit/s

# OPNsense 23.7 bare metal without virtualisation
6 Gbit/s

# Linux bare metal without virtualisation
10 Gbit/s
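For reference, a throughput test like the ones above can be run with iperf3 between the two VMs (a minimal sketch; the address is a placeholder, and the original tests used default settings):

```shell
# On the receiving VM (placeholder address 10.40.0.10):
iperf3 -s

# On the sending VM, with traffic routed through the firewall under test;
# default is a single TCP stream:
iperf3 -c 10.40.0.10 -t 30

# A single stream can be CPU-bound on one core; several parallel streams
# show whether the limit is per-flow or aggregate:
iperf3 -c 10.40.0.10 -t 30 -P 4
```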

I'm confused and I don't know what is wrong here.
The hardware is: Intel Corporation Ethernet Controller 10G X550T.

If you need more information, just ask.

Thank you all for helping us out.

kind regards,
ronny
#3
Hi, we are running into the same problem.

Is it possible to have more than 255 virtual CARP interfaces?

Thanks a lot for any help ...

kind regards,
ronny
#4
Hi ... it sounds like something else is going on, but OK.

First, look at Interfaces: Settings, the first three checkboxes:
Hardware CRC    Disable hardware checksum offload
Hardware TSO    Disable hardware TCP segmentation offload
Hardware LRO    Disable hardware large receive offload

Try disabling these. I had a lot of trouble with BGP in the past when TSO was enabled.
There is also a known issue with Realtek NICs, but that does not apply to a Proxmox setup.
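The same offload settings can also be inspected and toggled from the shell with FreeBSD's ifconfig (a sketch; vtnet1 is a placeholder interface name, and on OPNsense the GUI settings should be used in the end so the change persists):

```shell
# Show the current option flags; look for TXCSUM, RXCSUM, TSO4, LRO:
ifconfig vtnet1 | grep -i options

# Temporarily disable checksum offload, TSO and LRO on the interface:
ifconfig vtnet1 -txcsum -rxcsum -tso -lro
```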

At the moment I have no idea.

What kind of network card did you choose, VirtIO?
You are not using bridges? What does your setup look like?

ronny
#5
Hi ... once again, I found the solution hidden in the release notes:

o Media settings are no longer shown for non-parent interfaces and need to be set individually to take effect.  This can introduce unwanted configuration due to previous side effects in the code.  If the parent interface was not previously assigned please assign it to reapply the required media settings.

and on the upgrade screen:
... Media and hardware offload settings are no longer shown for non-parent interfaces ...

That means the hardware offload engines stay enabled until you assign and enable the "parent" interface.

In our example:
parent - vtnet1
vlan1 - vtnet1_vlan16
vlan2 - vtnet1_vlan20

vlan1 and vlan2 were assigned and enabled, and they had the problems.
First I assigned the interface vtnet1 and left it disabled: nothing happened.
Then I enabled the vtnet1 interface, with IP type "static".

And after that ... tada, TCP is working fine.
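A quick way to check that the fix took effect is to look at the interface option flags before and after enabling the parent (a sketch; the interface names are from the example above):

```shell
# While the parent is unassigned, the offload capabilities stay enabled;
# the options line should still list flags such as TSO4 and LRO:
ifconfig vtnet1 | grep -i options

# After assigning and enabling vtnet1, the disable-offload settings are
# reapplied and TSO4/LRO should disappear on the parent and its VLANs:
ifconfig vtnet1_vlan16 | grep -i options
```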

I hope this helps someone.

kind regards,
ronny

#6
hi guys,

we ran into the same problem. ICMP works, TCP does not.
I captured with tcpdump and saw the SYN and SYN-ACK packets.
After that, the sender only sends retransmissions.

Something gets stuck in the last part of the TCP handshake.
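A capture restricted to the handshake-relevant flags makes such a stall easy to see (a sketch; vtnet1_vlan16 and port 5201 are placeholders for the interface and test port):

```shell
# Show only SYN, FIN and RST packets for one test connection. A healthy
# handshake is SYN, SYN-ACK, then data; repeated SYNs or SYN-ACKs mean
# the final ACK or the following segments are being dropped:
tcpdump -ni vtnet1_vlan16 'port 5201 and tcp[tcpflags] & (tcp-syn|tcp-fin|tcp-rst) != 0'
```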

We can also provide more information for troubleshooting.

Thanks for any help; we have not found a solution so far.

kind regards,
ronny
#7
General Discussion / Re: openvpn with wan failover
January 12, 2021, 05:17:12 PM
Hi, sorry for being so late.

We found a solution and set up two OpenVPN servers with the same CA.
The problem is not the WAN failover; it comes from OpenVPN: the reply packets have the wrong source IP.
So we cannot use "listen on any" here.
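The workaround can be sketched as two server instances that differ only in the address they bind to (hypothetical documentation-range addresses; because both instances use certificates from the same CA, one client profile can authenticate against either):

```
# Server instance 1, bound explicitly to WAN 1:
local 198.51.100.10
port 1194
proto udp

# Server instance 2, bound explicitly to WAN 2:
local 203.0.113.10
port 1194
proto udp
```

Binding each instance with `local` instead of listening on any address ensures replies leave with the source IP the client connected to.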

kind regards,
ronny
#8
hi all,

This refers to https://forum.opnsense.org/index.php?topic=5293.0 - I'm running into the same problem.
We use Proxmox with the latest version of OPNsense.

If the backup node answers the DHCP request, the hostname on the master stays empty, and the DNS lookup fails as well.

Does anyone have an idea, or did I do something wrong?

thanks and kind regards,
ronny
#9
General Discussion / Re: openvpn with wan failover
August 25, 2020, 02:22:44 PM
Nobody has an idea?

Shouldn't these packets be routed via the default gateway?
What does BSD do differently here?

thanks
#10
General Discussion / openvpn with wan failover
August 25, 2020, 09:03:46 AM
Hi, I have a strange problem.

We have two WAN links in a gateway group, failover only, no load balancing.

Our VPN client config has two remote servers, one for each of the two WAN links.
The VPN server must listen on any interface.

The client should connect to the first IP. If that WAN link goes down, the gateway fails over (it does).
After that, the client should connect to the second IP, on the second WAN link.

Up to this point, everything works fine.

But when the first link comes back, the VPN traffic always stays on the second WAN link.
If I reconnect the VPN client, the connection comes in through the first WAN link but is answered via the second WAN link, although the default route points to the first.
If I restart the OpenVPN server, everything works again.

I had tested this setup before upgrading to 20.1, and it was working then.
I don't know what changed.

From a routing point of view, the traffic should always follow the default gateway.
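The client side of the failover setup described above can be sketched like this (hypothetical documentation-range addresses; with multiple `remote` lines OpenVPN tries them in order and falls back to the next one on failure):

```
# Client config fragment: try WAN 1 first, fall back to WAN 2
remote 198.51.100.10 1194 udp
remote 203.0.113.10 1194 udp

# Give up on a remote after a few attempts before trying the next
connect-retry-max 3
```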

Does someone have an idea?

thanks a lot and kind regards,
ronny
#11
Hi, I hope this is the right place.
I'm testing an OPNsense cluster on Proxmox; everything is working fine.

Now I noticed that during a backup (snapshot) the cluster failed over.
Proxmox does a short freeze to flush all the caches and so on.

Does anybody here have experience with backing up OPNsense on Proxmox?
Is it better to take a snapshot and additionally save the config.xml regularly?

thanks for help and kind regards,
ronny
#12
@hbc: thank you ... I will have a look.

Now I'm curious whether I misunderstood something about mDNS on a cluster, or whether this setup has a real problem.
#13
Nobody has an idea?

Is it possible to start a service via CARP HA, so that the service only runs on the master node?

kind regards,
ronny
#14
Hi,
I have a strange problem with mDNS.
We have configured an OPNsense cluster with multiple CARP IPs.
On 5 interfaces we need mDNS.

Now I see a client asking for mDNS records.
After that there is a huge amount of mDNS traffic, like flooding/looping.

I think node1 receives the traffic and repeats it to the configured networks; node2 then receives that traffic as well and does the same.
So what I see in tcpdump is this:

10:41:56.065513 IP 192.168.40.103.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.065526 IP 10.40.0.2.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.065588 IP 192.168.40.102.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.065647 IP 192.168.40.103.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.065721 IP 10.40.0.2.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.065777 IP 192.168.40.102.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.065880 IP 192.168.40.102.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.065888 IP 10.40.0.3.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.066001 IP 192.168.40.102.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.066010 IP 192.168.40.103.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.066093 IP 10.40.0.3.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)
10:41:56.066185 IP 10.40.0.2.5353 > 224.0.0.251.5353: 0 PTR (QM)? _services._dns-sd._udp.local. (46)

10.40.0.2/10.40.0.3 and 192.168.40.102/192.168.40.103 are the two firewall nodes in two separate networks.
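To make the flood visible at a glance, the capture can be summarized per source address; a minimal sketch (capture.txt is a hypothetical file holding tcpdump lines like the ones above):

```shell
# Count mDNS queries per source address; in a loop between two HA nodes
# the node addresses dominate the counts within a single second.
awk '$2 == "IP" { sub(/\.[0-9]+$/, "", $3); count[$3]++ }
     END { for (ip in count) print ip, count[ip] }' capture.txt | sort
```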

If I stop mDNS on node 2, the looping traffic stops and everything works fine.

So I think mDNS should run as a cluster service, only active on the master node.

Am I wrong? I did not find anything about this in the forum.
Can somebody please help me?

thanks and kind regards,
ronny
#15
OK, you should put this into the release notes.

Thanks