Messages - sparticle

#46
Quote from: cookiemonster on November 29, 2022, 04:05:59 PM
Ok so with iperf, with OPN as server, measuring from a client to it, not across it, you get 0.6 Gbps.
On other VMs the same test is almost 10 Gbps.
Got it. I still don't get where PPPoE fits, but you see now where I'm going. It is very important to describe the issue and how it's been measured. Good luck.

The WAN side is PPPoE and would of course see the same reduced performance, but as we have a <100 Mb connection it is not an issue.
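For reference, the 0.6 Gbps figure above is the kind of number iperf3 reports. A minimal sketch of pulling the throughput out of an iperf3 JSON run (`iperf3 -c <host> -J`), assuming a TCP test; the sample JSON is trimmed to the one field of interest:

```python
import json

# Trimmed example of iperf3 -J output; a run against the OpnSense VM
# topping out around 0.6 Gbps would look roughly like this.
sample = '{"end": {"sum_received": {"bits_per_second": 6.0e8}}}'

report = json.loads(sample)
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"{gbps:.2f} Gbps")  # 0.60 Gbps
```

Comparing that number for a client-to-firewall run versus a VM-to-VM run on the same vswitch is what isolates the problem to the OpnSense guest.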
#47
Quote from: cookiemonster on November 29, 2022, 12:50:21 PM
What is the measure of "performance is not great"? I am not disputing it, but it would be good to measurably understand the baseline and setup.
OP I read "All the linux VM's in the host run at wire speed and across the vswitch approaching 10G but the OpnSense VM is not great performance-wise." which gives an idea, but what is the setup to improve?
Is it VM to VM on the same host, is there routing involved, where and how? Where does PPPoE come into play?
With virtualised setups, there are so many variables that can be at play, maybe it is just me but I don't see the setup in my mind yet.

I get variable performance across the LAN to the OpnSense VM; iperf maxes out at around 0.6 Gbps. As I said, all the other VMs sitting on the same vswitch run at wire speed, and all use the same Broadcom physical quad-port NIC.

I have even put the OpnSense LAN on a separate PG (port group).

There is obviously some issue with the vmx drivers. There are many bug and issue reports filed against FreeBSD regarding VM network performance with the VMXNET3 driver on ESXi.

Cheers
Spart

#48
Quote from: Supermule on November 29, 2022, 11:22:00 AM
The driver that FBSD sees is the hypervisor presented NIC.

Not the actual one present. So if the problems are related to VmWare and Broadcom drivers, then change the NIC.

Otherwise the issue is elsewhere.

I have 50+ Pro1000 quad cards lying around if you need one.... they are from our old servers.

Hey thanks.

I have no issues with network performance on any of the other ESXi VMs; they are a mix of Linux and Windows.

But performance on OpnSense/FreeBSD is not great.

I am questioning my choice of replacing the Broadcom daughter board with the I350-based one. I have 6 PCIe 3.0 slots free: 3 full height and 3 half height.

What model of PRO1000 are the cards?

I could pop one in and dedicate that to OpnSense.

Cheers
Spart
#49
Quote from: pmhausen on November 28, 2022, 08:32:47 PM
Sorry, all my appliances have a sufficient number of onboard interfaces.

At 40 £ I would just give it a go - you can always resell it on eBay.

Lots more reading since.

Looks like the R720 can take a daughter card; it currently has the Broadcom NetXtreme 5720-based quad-port adaptor in it, but that can be replaced with an Intel I350 quad-port adaptor.

The I350 shows as fully supported in the latest 13.1 HCL. The Broadcom card is not on the list, so maybe that is part of the issue.

I have ordered one and will fit it on the next maintenance reboot of the ESXi server. I will then need to reconfigure networking on the OpnSense VM, as when it boots it will see an e1000e and not VMXNET3, and I will need to sort out the PPPoE connection for the WAN.

Cheers
Spart
#50
So does anyone have a tested recommendation, for ESXi 6.7 U3, for a performant Intel-based quad-port 1 Gb copper network adaptor?

Cheers
Spart
#51
Quote from: Supermule on November 27, 2022, 04:07:02 PM
As I said... we run 10gbit/s with X710-T4 nics and they are copper. :)

Yes, I understand, but we do not currently have 10G infrastructure! And we don't need to pay c. 400 GBP for a card.

If the Pro 1000PT works, it's about 40 GBP!

Cheers
Spart
#52
Quote from: gctwnl on November 27, 2022, 04:20:07 PM
Thank you.

In the end, I used the a.b.c.50/29 address when setting up the WAN. This means I cannot create an alias for a.b.c.50 itself, but I can for a.b.c.51-54. So, I cannot make 5 aliases, just 4.

Outgoing traffic gets the a.b.c.50 IP unless I use source NAT. As it works now, I'd rather leave it alone.

Glad you got it sorted. I remember messing with this for a while when we first migrated from Untangle.

Cheers
Spart
#53
Quote from: Supermule on November 27, 2022, 04:05:27 PM
X710-T4 is 10Gbit/s and copper.

How many do you need?

I think it's 10G copper, not 1 Gb copper.

Was thinking of something like this.

https://www.ebay.co.uk/itm/125310701535?hash=item1d2d198fdf:g:2YcAAOSwWC1ifNjh&amdata=enc%3AAQAHAAAA4PK9BXqm1PcvzcPNfI5azqrJ3iZs2TpSOT603digb4CUnbhSYEVIrynQPW0T2aJp6vNBiXU6YuH9fBw%2BuVgKZqPNitNlg36trw4886bCxxOGFzleR2xlf551ST5rWk0gzHgKIIPVwSUqEpSpOkI%2BNKQQqdtuDSr8cQR3gd76Sf7203asgCkoUh6N6GU0m7COAEygX2aoqiHuuUpATZFjgbFN0emnMEchFtPv3Bv2yVMXW3HMSlq4i1frs9wpBp8lva2A2lr8nmpyRk8upuqtg0qHGcfXvZbCQGd3oQFcW%2BrC%7Ctkp%3ABFBMjLGSmJdh

Pro 1000PT Quad port 1Gb Copper

Intel 82571EB chipset. Fully supported in ESXi 6.7 U3 and, it seems, by the FreeBSD em driver.

The em driver supports Gigabit Ethernet adapters based on the Intel 82540, 82541ER, 82541PI, 82542, 82543, 82544, 82545, 82546, 82546EB, 82546GB, 82547, 82571, 82572, 82573, 82574, 82575, 82576, and 82580 controller chips:

Intel Gigabit ET Dual Port Server Adapter (82576)

Intel Gigabit VT Quad Port Server Adapter (82575)

Intel Single, Dual and Quad Gigabit Ethernet Controller (82580)

Intel i210 and i211 Gigabit Ethernet Controller

Intel i350 and i354 Gigabit Ethernet Controller

Intel PRO/1000 CT Network Connection (82547)

Intel PRO/1000 F Server Adapter (82543)

Intel PRO/1000 Gigabit Server Adapter (82542)

Intel PRO/1000 GT Desktop Adapter (82541PI)

Intel PRO/1000 MF Dual Port Server Adapter (82546)

Intel PRO/1000 MF Server Adapter (82545)

Intel PRO/1000 MF Server Adapter (LX) (82545)

Intel PRO/1000 MT Desktop Adapter (82540)

Intel PRO/1000 MT Desktop Adapter (82541)

Intel PRO/1000 MT Dual Port Server Adapter (82546)

Intel PRO/1000 MT Quad Port Server Adapter (82546EB)

Intel PRO/1000 MT Server Adapter (82545)

Intel PRO/1000 PF Dual Port Server Adapter (82571)

Intel PRO/1000 PF Quad Port Server Adapter (82571)

Intel PRO/1000 PF Server Adapter (82572)

Intel PRO/1000 PT Desktop Adapter (82572)

Intel PRO/1000 PT Dual Port Server Adapter (82571)

Intel PRO/1000 PT Quad Port Server Adapter (82571)

Intel PRO/1000 PT Server Adapter (82572)

Intel PRO/1000 T Desktop Adapter (82544)

Intel PRO/1000 T Server Adapter (82543)

Intel PRO/1000 XF Server Adapter (82544)

Intel PRO/1000 XT Server Adapter (82544)

Looking at the FreeBSD hardware list, the vmx driver that is currently in use on our OpnSense is not even listed.
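As a quick sanity check, the em(4) list quoted above can be turned into a lookup; the chip IDs here are copied straight from that list, and it confirms the 82571 on the PRO/1000 PT quad is covered (whereas a Broadcom BCM5720 is not an em part at all):

```python
# Controller chips supported by em(4), per the list quoted above.
EM_CHIPSETS = {
    "82540", "82541ER", "82541PI", "82542", "82543", "82544",
    "82545", "82546", "82546EB", "82546GB", "82547", "82571",
    "82572", "82573", "82574", "82575", "82576", "82580",
}

def em_supports(chipset: str) -> bool:
    """True if the chipset appears in the em(4) supported list."""
    return chipset in EM_CHIPSETS

print(em_supports("82571"))   # PRO/1000 PT Quad Port -> True
print(em_supports("BCM5720")) # Broadcom NetXtreme -> False
```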

Cheers
#54
Hello,

Hoping someone has a recommendation for an Intel-based quad-port GbE NIC for my Dell R720 ESXi 6.7 host. We run OpnSense on ESXi and the network performance is not great. We have tried all the tweaks that have been posted to try and get OpnSense (FreeBSD) using the ESXi VMXNET3 adaptors to run at anything approaching wire speed.

All the Linux VMs in the host run at wire speed, approaching 10G across the vswitch, but the OpnSense VM is not great performance-wise. I raised issues with the BSD devs but no one has even looked at them. I can also see many other users posting about the network performance issues.

The R720 has a Dell-branded Broadcom quad-port card, and I am thinking maybe we switch that to an Intel-based card, as other posters have said that is the best option: the em driver is the best supported and is in the kernel.

Does anyone have a tested recommendation for an Intel-based quad-port copper GbE card?

Cheers
Spart
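For anyone finding this thread: the "tweaks that have been posted" generally amount to disabling the hardware offloads in the OpnSense GUI plus some vmx(4) loader tunables for queue and descriptor sizing. A sketch of the commonly circulated /boot/loader.conf.local entries follows; the values are illustrative examples only, not recommendations, and in our case none of this got VMXNET3 anywhere near wire speed:

```
# Commonly suggested vmx(4) tunables (queue and descriptor sizing).
# Example values only - effect varies, tune and measure for yourself.
hw.vmx.txnqueue=8
hw.vmx.rxnqueue=8
hw.vmx.txndesc=1024
hw.vmx.rxndesc=512
```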


#55
Quote from: Supermule on November 27, 2022, 03:17:43 PM
Dont play the victim...

And VMware is the market leader in virtualization, and you can fairly easily break the 10 Gbit/s barrier with server-grade hardware.

I have run pfSense virtualized since 2008 and couldn't even begin to grasp the prospect of running it bare metal.

There is very little overhead on ESXi regarding performance, and we use X710 NICs from Intel. No issues WSE.

Again the attack is not necessary!

Maybe your experience with the Intel-based NICs is the difference. The Dell R720 is enterprise HW, but maybe the Dell-branded Broadcom quad-port NIC is the issue.

We are on copper GbE, not fibre, so maybe not the X710.

If anyone has a recommendation for a full-height quad-port Intel card with the right support for ESXi, that would be great.

Cheers
Spart
#56
As of Nov 22, VMware has approx. 26% of the global virtualisation market, with MS at approx. 10%, Xen at about 8%, and the rest made up of the many other offerings out there.

My point in opening this dialogue was to understand the anchor to xBSD. I think I have my answer: it's the core dependency on PF.

I am not an OpnSense hater. I like the product. As stated, there are challenges with performance on VMware, but it's not the end of the world for me at present, as our backhaul is sub-100 Mb.

But I do look at the future and wonder whether all of the OpnSense goodness would be better served on a more mainstream, enterprise-class Linux foundation, with orders of magnitude more resources going into development. Arguments around security, scalability, reliability, resource management, etc. are all moot these days.

It seems open discourse is hard and the standard approach is to attack the poster!

Cheers
Spart
#57
We have the same /29 network from our provider.

Simply set up a set of virtual IP addresses on the WAN interface covering the 5 usable addresses and point them at the gateway address you will have been provided by your supplier. Ours is the top of the range +1; yours might be the bottom.

Our supplier uses a dynamic PPPoE address but routes our block to it, so all the addresses are publicly routable.

Then you can use them in FW rules etc.

Cheers
Spart
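To make the /29 arithmetic above concrete, here is a small sketch using Python's ipaddress module, with the documentation prefix 203.0.113.48/29 standing in for the real block; the gateway position is just an assumption for illustration (as noted, it may sit at the top or bottom of the range depending on the provider):

```python
import ipaddress

# A /29 carves out 8 addresses: network, 6 usable hosts, broadcast.
block = ipaddress.ip_network("203.0.113.48/29")
hosts = list(block.hosts())
print(len(hosts))  # 6

# With one host address taken by the provider's gateway (top of the
# range assumed here), 5 remain for WAN virtual IPs / aliases.
gateway = hosts[-1]
virtual_ips = [h for h in hosts if h != gateway]
print(len(virtual_ips))  # 5
```

That matches the earlier post in the thread: one address is consumed for the WAN/gateway side, leaving the rest to set up as virtual IPs usable in firewall and NAT rules.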

#58
Quote from: chemlud on November 27, 2022, 09:08:34 AM
Quote from: Supermule on November 26, 2022, 11:58:58 PM
Running bare metal is a waste of resources.

EOD.

For you. Maybe..  :P

Our electricity price has tripled, so no, I don't want to proliferate multiple systems when I have a perfectly capable server to virtualize in!

And if xBSD would invest some time in fixing the drivers, we would have performance parity.

Cheers
Spart
#59
Quote from: pmhausen on November 26, 2022, 06:46:35 PM

Personally if a Deciso appliance doesn't fit the bill I would not use anything less than some Supermicro server board with IPMI, ECC memory, and all the good stuff I'm used to. Actually that is precisely what I run at home currently. The board was left over after I upgraded my TrueNAS system (another very fine BSD based product, although picky about the hardware - surprise! ;)) So I bought just a Supermicro case, some Noctua fans, used left over SSDs and I am running OPNsense on server grade hardware with a ZFS mirror and definitely enough performance for all my home needs.

Kind regards,
Patrick

Yes, we have OpnSense running in ESXi on an ex-eBay Dell server. Plenty of enterprise goodness; just a pity about the network performance. The server has a quad-port NetXtreme BCM5720; maybe I need to swap it for an Intel card and try that.

Cheers
Spart
#60
Quote from: Bob.Dig on November 26, 2022, 06:27:43 PM
Quote from: sparticle on November 26, 2022, 06:13:44 PM
I suspect a large proportion of the community are home network or similar to myself users.
And how much are those contributing with money?  ;)

I can see why your handle is bob.dig!

:)