Just starting down the vSphere road... I assume with a passthrough PCIe card you also can't vMotion the VM to another host if the first host needs an update and reboot?
Now if your ESXi hosts each have an extra Ethernet port that is passed through (WAN to switch to host ports), could you then vMotion the VM to another host in that group that has a passthrough connection to the WAN?
1 Gbit/s virtualised is absolutely no big deal. I got that much on an Atom 3xxx with virtualised network adapters as well as with passthrough, running OPNsense on the FreeBSD bhyve hypervisor. Great for us home lab folks.

10 Gbit/s is a bit of a challenge even with dedicated hardware. It should be achievable if you have PCIe passthrough network cards. Do your servers have free PCIe slots? You could invest in a dual-port Intel X520 or similar first and try that approach. If your hosts have spare CPU cycles and memory, that is going to be a lot cheaper than dedicated appliances.

The easier way, of course, is dedicated hardware. Depending on your environment (I assume business, given the hints at two rather serious vSphere servers) that might or might not be a budget problem - I don't know.

Note that if you go virtualised plus PCIe passthrough, you cannot snapshot the VMs for a live backup and you cannot use dynamic memory allocation (ballooning). For a VM with a PCIe passthrough device the memory is always fully reserved, and snapshots are disabled.

HTH,
Patrick
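To make the memory constraint concrete: a VM with a passthrough device ends up with entries along these lines in its .vmx file (the PCI address, device index and 4 GB size here are made-up examples, not from any real config):

```ini
; Illustrative .vmx fragment for a VM with a PCIe passthrough NIC
; (0000:03:00.0 is a hypothetical PCI address - yours will differ)
pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:03:00.0"

; ESXi forces a full memory reservation for a VM with passthrough:
memSize = "4096"
sched.mem.min = "4096"      ; reservation equal to memSize, i.e. all RAM reserved
sched.mem.pinned = "TRUE"   ; memory cannot be ballooned or swapped
```

Because all guest memory must be pinned in host RAM, ballooning and memory overcommit are off the table for that VM, which is exactly why snapshots with memory state don't work either.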