Messages - Sunshine

#1
Quote from: viragomann on February 21, 2025, 11:13:17 PM
Quote from: Sunshine on February 21, 2025, 10:42:52 PM
OPNsense is able to ping Proxmox, though, and appears to be giving it the appropriate reserved address.
The Proxmox host in question is getting its IP from the DHCP server on OPNsense?
Otherwise check its network settings.

Yes, I believe it is. It has a reservation outside of the dynamic range. When I plug a monitor into the machine in question, it prompts me to connect and configure at 192.168.1.10, as I would expect.
I'm just noticing that the status icon indicates the node is offline, yet the VMs on the same bridge show as online.

#2
I'm having trouble accessing the Proxmox interface on a machine that is different from the one OPNsense is running on. I can, however, connect to the VMs on that machine. It's probably best described with the pictures: the clients can connect along the green lines, but not the red.
OPNsense is able to ping Proxmox, though, and appears to be giving it the appropriate reserved address.

The only thing I can think of is that both Proxmox instances were set up with the default hostname of "pve", and some searching implies that changing one is not a trivial task.
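For reference, the usual reason the rename isn't trivial is that Proxmox keeps per-node state keyed by the hostname. A rough sketch of the pieces involved, assuming a stock single-node install and a hypothetical rename from "pve" to "pve2" (verify against the Proxmox docs before touching a node with guests on it):

```shell
# Sketch, not a tested procedure -- paths assume a default single-node setup.
hostnamectl set-hostname pve2           # system hostname
sed -i 's/\bpve\b/pve2/g' /etc/hosts    # keep the hosts entry in sync
# Proxmox stores per-node config (VM/CT definitions, etc.) under /etc/pve/nodes/<name>;
# after a rename, the contents of the old directory have to be moved to the new one.
ls /etc/pve/nodes/
```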

It seems like a Proxmox issue, but it only occurs when running OPNsense. If I spin up a different firewall, I can access everything without issue. I recall running OPNsense about 5 or 6 years ago on this same hardware config and never had issues, so I'm assuming it's a simple setting I'm overlooking.

-----------------

Changing the hostname did not fix the issue. I discovered that the problematic Proxmox node was configured as 192.168.1.10/32. Changing it to a /24 subnet (the same as the node OPNsense is on) seems to have solved it, and I can now access the GUI. The previous version of OPNsense and a few other firewalls I've used all seemed fine with /32, though.
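One plausible explanation for why the /32 broke GUI access (my reasoning, not something confirmed above): with a /32, the node has no on-link route for the rest of 192.168.1.0/24, so every reply to a LAN client has to go via the default gateway, and whether that works depends on the firewall hairpinning the traffic. A quick illustration of the on-link difference using Python's ipaddress module:

```python
import ipaddress

def on_link(host_iface: str, neighbor: str) -> bool:
    """True if `neighbor` falls inside the directly connected subnet
    implied by the host's address/prefix."""
    net = ipaddress.ip_interface(host_iface).network
    return ipaddress.ip_address(neighbor) in net

# With a /32, the node's connected network contains only itself,
# so a LAN client is never considered directly reachable:
print(on_link("192.168.1.10/32", "192.168.1.50"))  # False
print(on_link("192.168.1.10/24", "192.168.1.50"))  # True
```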


#3
I got sidetracked but am finally looping back. I was able to improve performance with both interfaces passed through, and I'm now getting 960+ down on both OPNsense (out-of-the-box config) and Untangle.
Unfortunately, too much time had passed and I forgot where I was, so I just started over from scratch. I suspect the problem was an incorrect passthrough setup.
#4
A bit more tinkering. I ended up passing through two of the interfaces, which was a bit of an adventure, but I ultimately ended up with the same performance.
Switching back to Untangle, I noticed it had slowed down and traced it to the interfaces falling back to half duplex. I corrected that and Untangle was back up to speed. That got me looking at the OPNsense interface settings.

It looks like the LAN is properly autodetecting 1G full. Changing the setting to 1000baseT-full makes no difference.
On the WAN side I see there is no place to set it in the GUI. OPNsense has it detected as:
  • Media   10Gbase-T <full-duplex>
  • Media (Raw)   Ethernet autoselect (10Gbase-T <full-duplex>)

But it's only a 1G Intel interface. Any idea if this could be causing trouble, or is it okay that it's over-spec'd?
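If the media ever does need to be forced, FreeBSD (which OPNsense runs on) lets you pin it from the shell; "igb1" here is a placeholder, so check `ifconfig` for the real WAN device name:

```shell
# Sketch, assuming the WAN NIC shows up as igb1.
ifconfig igb1                                        # show current media / link status
ifconfig igb1 media 1000baseT mediaopt full-duplex   # force 1G full duplex
```

A change made this way does not survive a reboot; anything permanent should go through the interface configuration instead.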
#5
I've been through most of the speed threads here and think I've tried everything mentioned, but having switched to OPNsense from Untangle, I'm getting noticeably slower downloads on otherwise identical VMs.
My hardware is old but relatively common, so I don't think it has any major quirks to work around. I'm running Proxmox on a Qotom PC with 4 Intel 1G NICs.
The VMs are identical, with virtio bridges and the i5 CPU passed through as 'host'. It's been pretty solid for about 3-4 years on Untangle, but the licensing has changed and I'm shopping for alternatives now.

I'm running speed tests on a LAN machine. My ISP service is 1G down / 40M up.
I believe all hardware offload is disabled. My OPNsense install is fresh, with nearly default settings.
During tests, the CPU report in the OPNsense dashboard never shows peaks higher than 50%, and it mostly runs under 10%. Proxmox reports a bit higher, but nothing I see that's concerning.
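For anyone wanting to double-check the "offload is disabled" part from a shell on the OPNsense VM (the interface name "igb0" is a placeholder for the actual LAN device):

```shell
# FreeBSD lists active capabilities in the options= line of ifconfig output.
ifconfig igb0 | grep -i options   # look for TSO4/TSO6, LRO, TXCSUM/RXCSUM
sysctl net.inet.tcp.tso           # 0 when TCP segmentation offload is globally off
```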

With Untangle I get:
  • directly on VM: untested
  • LAN client: 850-900 down

With OPNsense:
  • directly on VM: 920 down
  • LAN client / OPNsense, no add-ins: 630-690 down
  • LAN client w/ WireGuard enabled: 530 down
  • LAN client w/ WireGuard and IPS: 430 down
  • LAN client w/ Zenarmor: 380 down (edit: just added)

It seems the LAN interface is the bottleneck, but I'm just a hobbyist, so I don't want to jump to conclusions. So far the only change I've tried is setting "generic-receive-offload off" in Proxmox, but I'm not seeing much change.
I can live with some performance hit on the bridges, but I plan to add more services like Zenarmor and am concerned that it will get more sluggish.
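For completeness, the host-side GRO toggle can be applied per interface with ethtool on the Proxmox host; "eno1" and "tap100i0" are placeholders for the actual physical NIC and a VM's tap device:

```shell
# Sketch: turn generic receive offload off on the physical NIC and a VM tap device.
ethtool -k eno1 | grep generic-receive-offload   # show current state
ethtool -K eno1 gro off
ethtool -K tap100i0 gro off   # tap devices are recreated when the VM starts,
                              # so persisting this needs e.g. a Proxmox hookscript
```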

Any suggestions for things I've missed?