Strange Intranet Routing

Started by scoobey, August 28, 2023, 08:26:03 AM

I have a virtualized instance of OPNsense running on Proxmox, with PCI passthrough for all NICs in use.

WAN is currently a 10.0.0.x address
LAN is 10.1.1.x
PVT is 10.20.20.x

Blocking of RFC 1918 and bogon networks is disabled on all interfaces.

The default allow rule from LAN was cloned to the PVT interface, and both networks can reach the 10.0.0.x network as well as the Internet beyond it.

Currently I have multiple hosts on the LAN and two hosts on the PVT, all with static IPs and static ARP entries in the DHCP server.

PVT hosts: .27, .28 and .30
24 hours ago, .27 and .28 were both accessible from the LAN.
In the last 12 hours, .27 was removed and a new host with .30 was added to the PVT.
Both of these hosts can connect to each other.


From the LAN I can access the .30 server but not the .28 server.

I ran tracert from the LAN: to .30 it shows a single hop via the 10.1.1.1 gateway, while tracert to .28 shows the hop to 10.1.1.1 and then times out until it reaches 30 hops.

The first attempt to tracert the .28 host from the LAN showed
10.1.1.1
10.0.0.??
10.0.0.??

I forgot to write down the IPs and lost the results.
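
To make the comparison repeatable, something like this small Python sketch run from a LAN host could check both PVT addresses in one go (the .28 and .30 addresses are the ones above; the ping count flag differs between Windows and Unix-like systems, hence the branch):

    # quick reachability check of the two PVT hosts from a LAN machine
    import platform
    import subprocess

    HOSTS = ["10.20.20.28", "10.20.20.30"]

    # Windows ping uses -n for the packet count, Unix-like systems use -c
    count_flag = "-n" if platform.system() == "Windows" else "-c"

    for host in HOSTS:
        result = subprocess.run(
            ["ping", count_flag, "3", host],
            capture_output=True,
            text=True,
        )
        status = "reachable" if result.returncode == 0 else "NOT reachable"
        print(f"{host}: {status}")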


The one strange thing is that you never mentioned what "access" means in this context. Some questions:

* Does it mean you have a server running on those hosts, or are you just trying to PING them?
* Did you check that netmask/prefix length is correctly set on all networks so that no overlap can occur?
* Did you create the appropriate rules from WAN/LAN/PVT to allow all connections?
* Did you enable firewall logging on all relevant rules to inspect what is going on, or alternatively run Wireshark sessions on all interfaces to trace the ICMP traffic (see the sketch below)?
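
If a full Wireshark session is overkill, something like this little scapy sketch, run on the firewall or a PVT host while pinging from the LAN, would at least show whether the ICMP echo requests arrive at all (the interface name "igb2" is only a placeholder, adjust it to whatever your PVT NIC is called; it needs to run with root/admin privileges):

    # watch ICMP on the PVT interface while pinging from the LAN
    # "igb2" is only a placeholder for the PVT NIC name on your box
    from scapy.all import sniff

    def show(pkt):
        # print a one-line summary of every ICMP packet seen
        print(pkt.summary())

    sniff(iface="igb2", filter="icmp", prn=show, store=False)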





August 29, 2023, 08:14:32 PM #2 Last Edit: August 29, 2023, 08:27:13 PM by scoobey
Thanks for the ideas, tron80.

So I think I finally found the problem after reinstalling all systems, swapping out switches and NICs...

Pretty sure it was a faulty network cable. It worked intermittently, then would fail at random times. Replacing the cable also improved overall throughput on the PVT network and reduced CPU spikes from 20% down to 4-5%. All seems to be good now.

When I pinged the host with the bad cable and there was no connectivity, the firewall logs showed no traffic at all.

After a few hours of operation that bad network cable would cause the entire PVT network to stop working, even internally. I only noticed it because after it had been in operation for over 12 hours it would not even maintain a constant link; it kept cycling on and off. Hopefully this resolves it.