As I said... we run 10 Gbit/s with X710-T4 NICs, and they are copper.
Sorry, all my appliances have a sufficient number of onboard interfaces.
At £40 I would just give it a go - you can always resell it on eBay.
The driver that FreeBSD sees is the hypervisor-presented NIC, not the actual one in the box. So if the problems are related to VMware and Broadcom drivers, then change the NIC. Otherwise the issue is elsewhere.
I have 50+ Pro/1000 quad cards lying around if you need one... they are from our old servers.
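For reference, a quick way to confirm which virtual NIC the guest actually sees - a rough sketch, assuming console or SSH access to the OPNsense VM (output will vary):

  # list PCI devices with vendor info; network devices show which FreeBSD driver attached
  pciconf -lv | grep -B 3 network

  # vmx = VMXNET3 paravirtual adapter, em = emulated Intel E1000/E1000E
  dmesg | grep -i -e '^vmx' -e '^em'

If the interface shows up as vmx or em, the guest is on VMware's virtual adapter and the physical card's driver never enters the picture.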
What is the measure of "performance is not great"? I am not disputing it, but it would be good to measurably understand the baseline and setup.
OP, I read "All the linux VM's in the host run at wire speed and across the vswitch approaching 10G but the OpnSense VM is not great performance-wise.", which gives an idea, but what is the setup to improve? Is it VM to VM on the same host? Is there routing involved, and where and how? Where does PPPoE come into play?
With virtualised setups there are so many variables at play; maybe it is just me, but I don't see the setup in my mind yet.
OK, so with iperf with OPN as the server, measuring from a client to it, not across it, you get 0.6 Gbps.
On other VMs the same test is almost 10 Gbps.
Got it. I still don't get where PPPoE fits, but you see now where I'm going. It is very important to describe the issue and how it has been measured. Good luck.
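For anyone following along, here is roughly how the two measurements differ - a sketch only; the IP addresses are placeholders and it assumes iperf3 is available on the OPNsense VM:

  # Test TO the firewall: server on OPNsense, client on a LAN host
  iperf3 -s                          # on the OPNsense VM
  iperf3 -c 192.168.1.1 -t 30        # on the LAN client

  # Test ACROSS the firewall: server on a host behind a different interface,
  # client on the LAN side, so traffic is actually routed by OPNsense
  iperf3 -s                          # on the far-side host, e.g. 10.0.0.10
  iperf3 -c 10.0.0.10 -t 30 -P 4     # on the LAN client

The first number tells you how fast the firewall VM itself can terminate TCP; the second is the one that matters for routed (and eventually PPPoE) throughput.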
Does anyone have any experience with ESXi 6.7 and either of those cards with OPNsense that they can share?