Hello
Currently running OPNsense in a VM environment with an overkill 4 CPUs, 64 GB RAM and a 128 GB HD... My connection is 40 Mbit/s, nothing remotely fast but fast enough. I wanted to get out of the VM scenario, and I have a spare Intel Celeron G4900 3.10 GHz with 16 GB RAM and a 500 GB HD. Will this suffice?
Typical appliance-style deployments are on the order of 8 GB of memory and a 128 GB SSD or similar, so those parameters are definitely sufficient, verging on overkill.
I lack the experience to rate specific CPUs - please provide some more information about your upstream bandwidth and the services you intend to run so others can give some helpful advice.
Also the actual chipset of the network interfaces is definitely relevant. What about them?
Well.
My network consists of 5 static IPs. I am running 2 separate VMs for [low-volume] personal email, plus gaming; nothing intense, this is just a home setup. Cameras, Alexas... I'd say never more than 50 devices at one time, and 90% of those are the litter box, the Ring, and random devices.
My LAN interface is a newer 10 Gbit/s card, an Intel X540-T1, and my WAN would be an Intel I350-T4 1 Gbit/s.
I have 40 Mbit/s [~5 MB/s] down and 2 Mbit/s [~250 KB/s] up. Nothing to write home about.
It then depends more on what you expect from the LAN interface, for example whether multiple VLANs sit on it and you have inter-VLAN traffic. As for the WAN speed: if OPNsense is used only as a gateway to the 40 Mbit/s WAN, the Celeron G4900 CPU will surely suffice. Then again, with such a setup a 10 Gbit/s LAN interface would be overkill and only waste energy.
My only purpose for the 10 Gbit/s is that I am running 5 different machines, some of them VMs, and I like the interworkings [which are actually all routed through a Cisco SG350XG 10 Gbit/s switch] connected to the LAN on the OPNsense. The SG350 carries all of the networks and the inter-VLAN routing, so maybe my LAN on OPNsense does not even need to be 10 Gbit/s if everything routes "locally" through the SG.
Added another NIC and passed them both through. The VM loads up all nice but sees neither NIC. So I went the other way: made bridge connections and added them, and both NICs are found in the VM. So that works.
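For anyone trying the same thing, the bridged approach can be sketched roughly like this on a Linux/KVM host. The hypervisor was not named in the post, and the interface names (enp3s0, br0) are assumptions for illustration only:

```shell
# Hypothetical sketch: bridge a physical NIC so a VM can use it,
# instead of PCI passthrough. Names are placeholders.
ip link add name br0 type bridge      # create the bridge
ip link set enp3s0 master br0         # enslave the physical NIC
ip link set enp3s0 up
ip link set br0 up
# Then point the VM's virtual NIC at br0, e.g. in a libvirt domain:
#   <interface type='bridge'><source bridge='br0'/></interface>
```

This is a configuration fragment, not a turnkey recipe; the equivalent steps differ per hypervisor (Proxmox, ESXi, Hyper-V, etc.).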
Does it really make sense to first separate out different VLANs and then route them through the switch, which presumably has no rules to regulate that inter-VLAN traffic, when you have a capable firewall that could do it for you at a more fine-grained level? Just asking...
But correct, that keeps the inter-VLAN traffic off your OPNsense, lowering its hardware requirements.
My setup is this:
OPNsense:
WAN
LAN
The LAN is 172.16.2.1 and connects to the SG350XG, which hosts 6 networks and 6 VLANs. The LAN on OPNsense is just for routing and firewall; everything else is done through the SG350XG. If my OPNsense had 6 interfaces I would do it all on there. There are NO VLANs or any routing on the OPNsense; everything has a static route to the SG350XG.
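To make the "static route to the SG350XG" part concrete, here is a minimal sketch from the FreeBSD shell on OPNsense. Only the 172.16.2.1 LAN address comes from the post; the 172.16.10.0/24 network and the switch address 172.16.2.2 are assumed for illustration:

```shell
# Hypothetical sketch: tell OPNsense that one of the switch-hosted
# networks is reachable via the SG350XG on the LAN segment.
# 172.16.10.0/24 and 172.16.2.2 are placeholder values.
route add -net 172.16.10.0/24 172.16.2.2
```

In practice you would define the switch as a gateway and add these routes persistently via System > Routes in the OPNsense GUI rather than from the shell, so they survive reboots.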
Anyway, I am just trying different things.
I believe @meyergru is suggesting a setup similar to mine. My switches just know about the VLANs and are only configured for tagging/untagging as appropriate.
No routing is done there. All inter-VLAN routing is performed on OPN, using typical FW rules controlling ingress per interface/VLAN at the desired granularity.
Same GUI to control all the rules. Same reporting. I only have to tinker with the switches when I add new devices (configure an access port).
All intra-VLAN traffic (modulo traffic to/from the VLAN gateway) never reaches OPN.
Inter-VLAN traffic is negligible compared to intranet->Internet traffic in my case, so the additional load is not a concern.