Why BSD base. Why not Linux base?

Started by sparticle, November 26, 2022, 02:11:53 PM

Quote from: chemlud on November 27, 2022, 09:08:34 AM
Quote from: Supermule on November 26, 2022, 11:58:58 PM
Running bare metal is a waste of resources.

EOD.

For you. Maybe..  :P

Our electricity price has tripled, so no, I don't want to proliferate multiple systems when I have a perfectly capable server to virtualize on!

And if xBSD invested some time in fixing the drivers, we would have performance parity.

Cheers
Spart

There is a lot of work being put into making FreeBSD perform top notch on AWS EC2 and Firecracker. Admittedly, the performance on ESXi leaves a bit to be desired. Maybe a KVM-based hypervisor would be an option for you?

As I said: if only VMware would support open standards and VirtIO.
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

Quote from: sparticle on November 26, 2022, 04:16:21 PM
Quote from: pmhausen on November 26, 2022, 03:58:00 PM
OPNsense's basic architecture is built on the pf packet filter - which is BSD only.
Of course you can build a Linux based firewall, but it wouldn't be OPNsense.

OpenWRT and IPfire exist.

They do exist but are clunky. I came here from Untangle.


So, you came here and asked OPNsense to move over to Linux. Did you also ask IPFire and OpenWRT to not be so clunky anymore? :)

As of November 2022, VMware has approximately 26% of the global virtualisation market, with Microsoft at approximately 10%, Xen at about 8%, and the rest made up of the many other offerings out there.

My point in opening this dialogue was to understand the anchor to xBSD. I think I have my answer, and it's the core dependency on pf.

I am not an OPNsense hater. I like the product. As stated, there are challenges with performance on VMware, but it's not the end of the world for me at present, as our backhaul is sub 100 Mb.

But I do look to the future and wonder whether all of the OPNsense goodness would be better served on a more mainstream, enterprise-class Linux foundation, with orders of magnitude more development resources behind it. Arguments around security, scalability, reliability, resource management, etc. are all moot these days.

It seems open discourse is hard and the standard approach is to attack the poster!

Cheers
Spart

Don't play the victim...

And VMware is the market leader in virtualization, and you can fairly easily break the 10 Gbit/s barrier with server-grade hardware.

I have run pfSense virtualized since 2008 and couldn't even begin to grasp the prospect of running it bare metal.

There is very little performance overhead on ESXi, and we use X710 NICs from Intel. No issues whatsoever.

Quote from: Supermule on November 27, 2022, 03:17:43 PM
Don't play the victim...

And VMware is the market leader in virtualization, and you can fairly easily break the 10 Gbit/s barrier with server-grade hardware.

I have run pfSense virtualized since 2008 and couldn't even begin to grasp the prospect of running it bare metal.

There is very little performance overhead on ESXi, and we use X710 NICs from Intel. No issues whatsoever.

Again, the attack is not necessary!

Maybe your experience with the Intel-based NICs is the difference. The Dell R720 is enterprise HW, but maybe the Dell-branded Broadcom quad-port NIC is the issue.

We are on copper GbE, not fibre, so maybe not the X710.

If anyone has a recommendation for a full-height quad-port Intel card that has the right support for ESXi, that would be great.

Cheers
Spart

X710-T4 is 10Gbit/s and copper.

How many do you need?

November 27, 2022, 04:09:09 PM #22 Last Edit: November 27, 2022, 04:21:36 PM by sparticle
Quote from: Supermule on November 27, 2022, 04:05:27 PM
X710-T4 is 10Gbit/s and copper.

How many do you need?

I think that's 10G copper, not 1Gb copper.

Was thinking of something like this.

https://www.ebay.co.uk/itm/125310701535?hash=item1d2d198fdf:g:2YcAAOSwWC1ifNjh&amdata=enc%3AAQAHAAAA4PK9BXqm1PcvzcPNfI5azqrJ3iZs2TpSOT603digb4CUnbhSYEVIrynQPW0T2aJp6vNBiXU6YuH9fBw%2BuVgKZqPNitNlg36trw4886bCxxOGFzleR2xlf551ST5rWk0gzHgKIIPVwSUqEpSpOkI%2BNKQQqdtuDSr8cQR3gd76Sf7203asgCkoUh6N6GU0m7COAEygX2aoqiHuuUpATZFjgbFN0emnMEchFtPv3Bv2yVMXW3HMSlq4i1frs9wpBp8lva2A2lr8nmpyRk8upuqtg0qHGcfXvZbCQGd3oQFcW%2BrC%7Ctkp%3ABFBMjLGSmJdh

PRO/1000 PT quad port, 1Gb copper

Intel 82571EB chipset. Full support in ESXi 6.7 U3 and, it seems, the FreeBSD em driver.

The em driver supports Gigabit Ethernet adapters based on the Intel 82540, 82541ER, 82541PI, 82542, 82543, 82544, 82545, 82546, 82546EB, 82546GB, 82547, 82571, 82572, 82573, 82574, 82575, 82576, and 82580 controller chips (a quick way to check which driver claims a given card is sketched after the list below):

Intel Gigabit ET Dual Port Server Adapter (82576)

Intel Gigabit VT Quad Port Server Adapter (82575)

Intel Single, Dual and Quad Gigabit Ethernet Controller (82580)

Intel i210 and i211 Gigabit Ethernet Controller

Intel i350 and i354 Gigabit Ethernet Controller

Intel PRO/1000 CT Network Connection (82547)

Intel PRO/1000 F Server Adapter (82543)

Intel PRO/1000 Gigabit Server Adapter (82542)

Intel PRO/1000 GT Desktop Adapter (82541PI)

Intel PRO/1000 MF Dual Port Server Adapter (82546)

Intel PRO/1000 MF Server Adapter (82545)

Intel PRO/1000 MF Server Adapter (LX) (82545)

Intel PRO/1000 MT Desktop Adapter (82540)

Intel PRO/1000 MT Desktop Adapter (82541)

Intel PRO/1000 MT Dual Port Server Adapter (82546)

Intel PRO/1000 MT Quad Port Server Adapter (82546EB)

Intel PRO/1000 MT Server Adapter (82545)

Intel PRO/1000 PF Dual Port Server Adapter (82571)

Intel PRO/1000 PF Quad Port Server Adapter (82571)

Intel PRO/1000 PF Server Adapter (82572)

Intel PRO/1000 PT Desktop Adapter (82572)

Intel PRO/1000 PT Dual Port Server Adapter (82571)

Intel PRO/1000 PT Quad Port Server Adapter (82571)

Intel PRO/1000 PT Server Adapter (82572)

Intel PRO/1000 T Desktop Adapter (82544)

Intel PRO/1000 T Server Adapter (82543)

Intel PRO/1000 XF Server Adapter (82544)

Intel PRO/1000 XT Server Adapter (82544)
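
To sanity-check a card like that before buying, here is a minimal sketch (my own illustration, not OPNsense code) that lists the PCI network devices on a FreeBSD/OPNsense host and shows whether a driver has attached to each one. It only assumes the stock pciconf(8) utility; the exact output format varies a little between FreeBSD releases.

```python
# Sketch: list PCI network controllers on a FreeBSD host and show which
# kernel driver (if any) has attached to each one. Assumes only the stock
# pciconf(8) utility; output details differ slightly between releases.
import subprocess

def network_devices():
    out = subprocess.run(["pciconf", "-lv"], capture_output=True,
                         text=True, check=True).stdout
    devices, current = [], None
    for line in out.splitlines():
        if line and not line[0].isspace():
            # Header line, e.g. "em0@pci0:2:0:0: class=0x020000 ..."
            # A selector starting with "none" means no driver attached.
            current = {"selector": line.split("@", 1)[0],
                       "header": line, "desc": ""}
            devices.append(current)
        elif current is not None and "device" in line and "=" in line:
            # Indented "device = '82571EB/82571GB ...'" description line.
            current["desc"] = line.split("=", 1)[1].strip().strip("'")
    # class=0x02xxxx marks network controllers
    return [d for d in devices if "class=0x02" in d["header"]]

if __name__ == "__main__":
    for dev in network_devices():
        attached = ("no driver attached" if dev["selector"].startswith("none")
                    else "driver: " + dev["selector"])
        print(f"{dev['desc'] or dev['header']}  ->  {attached}")
```

A card claimed by em(4) shows up with a selector like em0 or em1; one that shows up as none0 has no driver attached at all.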

Looking at the FreeBSD hardware list, the vmx driver that is currently in use on our OPNsense is not even listed.

Cheers

I'd use PCIe passthrough if I were running OPNsense in ESXi. Only one 10G interface as a trunk to the switch is necessary; the rest can be done in VLANs. Or two if you want multi-chassis LACP for redundancy. Router on a stick ... (a rough sketch follows below).
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)
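
To make the "router on a stick" idea above concrete, here is a rough sketch of what that layout implies on the FreeBSD side. The trunk interface name ix0, the VLAN tags and the addressing are invented for illustration, and on a real OPNsense install you would create the VLANs in the web GUI rather than with raw ifconfig; the script only prints the equivalent commands.

```python
# Illustration only: print the FreeBSD ifconfig commands implied by a
# "router on a stick" layout, where one passed-through 10G trunk port
# carries several tagged VLANs, one per firewall segment. Interface name,
# tags and addresses below are assumptions, not a recommended plan.
TRUNK_IF = "ix0"                      # hypothetical PCIe-passthrough NIC
SEGMENTS = {                          # VLAN tag -> example gateway address
    10: "192.168.10.1/24",            # LAN
    20: "192.168.20.1/24",            # guests
    30: "192.168.30.1/24",            # servers / DMZ
}

def trunk_commands(trunk: str, segments: dict) -> list[str]:
    cmds = [f"ifconfig {trunk} up"]
    for tag, addr in segments.items():
        vif = f"vlan{tag}"
        # Create a tagged child interface on the trunk and give it the
        # gateway address for that segment.
        cmds.append(f"ifconfig {vif} create vlan {tag} vlandev {trunk}")
        cmds.append(f"ifconfig {vif} inet {addr} up")
    return cmds

if __name__ == "__main__":
    for cmd in trunk_commands(TRUNK_IF, SEGMENTS):
        print(cmd)
```

The hypervisor side then only needs to hand the single passed-through port to the VM; all segmentation happens as tags on that one trunk.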

For security reasons PCI passthrough is not recommended.

Quote from: Supermule on November 27, 2022, 05:22:08 PM
For security reasons PCI passthrough is not recommended.
Care to elaborate?
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)


Using FreeBSD allows things like this:
https://forum.opnsense.org/index.php?topic=25540.0

Also worth mentioning (assuming compatible hardware) is that OPNsense is rock solid. I can upgrade from an old version to the current version without any issues. That's what comes with FreeBSD. When upgrading Ubuntu, Debian or any other Linux distribution, things may break. With FreeBSD, the system (in my experience) can run with little maintenance. Also, under full load the system stays stable, not dropping any connections. FreeBSD also means a smaller attack surface.
I want all services to run at wire speed and therefore run this dedicated hardware configuration:

AMD Ryzen 7 9700x
ASUS Pro B650M-CT-CSM
64GB DDR5 ECC (2x KSM56E46BD8KM-32HA)
Intel XL710-BM1
Intel i350-T4
2x SSD with ZFS mirror
PiKVM for remote maintenance

private user, no business use

Quote from: Supermule on November 27, 2022, 07:20:50 PM
https://www.tenable.com/audits/items/CIS_VMware_ESXi_6.7_v1.2.0_L1.audit:1d17d57677b4afb74b44266c06e9f728
So you are worried about the guest OS running in your firewall VM, which is your foremost line of defense, attacking your hypervisor host through PCIe passthrough?

OK ... you do you, I guess. I'd worry about more productive things. If I did not trust the OPNsense code, I would use a different firewall in the first place.
Deciso DEC750
People who think they know everything are a great annoyance to those of us who do. (Isaac Asimov)

Quote from: Supermule on November 27, 2022, 05:22:08 PM
For security reasons PCI passthrough is not recommended.
Then it seems virtualization is not recommended. I'm not inclined to run my NVMe ZFS mirror and Intel Ethernet adapters through virtualization on a router; it just doesn't seem right to me. It's more of a bare-metal scenario. That's why I went with a dedicated host. I thought about using it for a NAS as well but decided I didn't want to mess with it.

Quote from: sparticle on November 26, 2022, 06:13:44 PM
Maybe my hardware choices are the issue. VM performance is not great compared to Linux; driver issues abound.

Dedicated HW like the link you provided I can understand.
Yeah, it seems to me your choices are not suited to the product, and you came here seriously asking them to change the operating system and packet filter just to suit your scenario, rather than building a dedicated router that will run right?

Quote from: sparticle on November 27, 2022, 01:45:14 PM
Our electricity price has tripled, so no, I don't want to proliferate multiple systems when I have a perfectly capable server to virtualize on!

And if xBSD invested some time in fixing the drivers, we would have performance parity.
My dedicated OPNsense router uses very little electricity, and that is part of why it is dedicated. I want it to be one of the last things running if and when I'm on backup power. I don't need some giant (possibly outdated?) server running 30 different things to keep going just to keep my router alive. It uses basically no CPU, RAM or disk on the machine, even with Suricata and such running, and that's how I like it.

Maybe I'm mistaken and virtualization is the way to run this kind of router. To me, for my home network, it did not make sense.

It's a FOSS product, right? Fork it if you want, I guess. And then maybe you'd realize you'd be starting over from the ground up to change the operating system. Don't like the drivers? Fix them. Don't know what it would take to fix them or how to do it? Maybe don't dictate to other people what they do with their skills and time.