Messages - opnfwb

Pages: 1 ... 13 14 [15] 16 17 ... 23
211
21.1 Legacy Series / Re: Intermittent and transient network errors
« on: February 24, 2021, 06:14:25 am »
More info here about DoT with cert validation. https://www.ctrl.blog/entry/unbound-tls-forwarding.html

Unfortunately the OPNsense GUI doesn't offer a domain name field to allow cert validation at this time. If you want a fully secure DoT setup, you'll need something like this in your custom settings (be sure to remove the duplicate forwarder entries in the Miscellaneous section):

Code: [Select]
# TLS Config
tls-cert-bundle: "/etc/ssl/cert.pem"
# Forwarding Config
forward-zone:
  name: "."
  forward-tls-upstream: yes
  forward-addr: 2620:fe::9@853#dns9.quad9.net
  forward-addr: 9.9.9.9@853#dns9.quad9.net
  forward-addr: 149.112.112.9@853#dns9.quad9.net

You can modify that to taste for whichever DoT provider you want to use.
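For example (this variant isn't from the original post), Cloudflare's published DoT endpoints would be configured the same way; verify the addresses and the auth name after the # against Cloudflare's own documentation before relying on it:

Code: [Select]
```
# TLS Config
tls-cert-bundle: "/etc/ssl/cert.pem"
# Forwarding Config
forward-zone:
  name: "."
  forward-tls-upstream: yes
  forward-addr: 2606:4700:4700::1111@853#cloudflare-dns.com
  forward-addr: 1.1.1.1@853#cloudflare-dns.com
  forward-addr: 1.0.0.1@853#cloudflare-dns.com
```

The hostname after the # is what Unbound checks the server certificate against, so it must match the name on the provider's TLS certificate.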

If the WAN interface is on a network with a private IP range (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, etc.), I would also suggest going to Interfaces/WAN and unchecking "Block private networks" and "Block bogon networks".

Try those two things and see if it helps?

212
21.1 Legacy Series / Re: Intermittent and transient network errors
« on: February 24, 2021, 02:45:07 am »
How do you have Unbound's secure DNS configured for Cloudflare? OPNsense is a little different from pfSense when it comes to getting a full DoT implementation; you'll need to use Custom Options.

Are there any in/out interface errors when viewing LAN/WAN interfaces by selecting Interfaces/Overview within the OPNsense GUI?

213
21.1 Legacy Series / Re: Installer hangs at "Select Task" on Hyper-V Generation 2 VM
« on: February 17, 2021, 10:20:43 pm »
I have a similar setup but I have had success getting OPNsense installed. I did have to use the CTRL+C trick once, but the install finished after that.

My Specs:
Gen2 VM
2 vCPU
1024MB RAM (uncheck dynamic memory)
16GB disk

I attached the ISO to the DVD drive of the VM, booted it, and installed.

214
Hardware and Performance / Re: Installation stuck at 38% ...
« on: February 15, 2021, 08:55:59 pm »
One other thing that may be worth trying: did you download the ISO installer or the VGA .img installer? The wording on the OPNsense website seems to indicate the ISO may have better support for non-UEFI MBR systems like this one (they specifically state the VGA installer boots as GPT). If you haven't tried the ISO, download it, use dd to write it to the same USB stick, then boot from that and see if the install works better.
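The dd step can be sketched like this. The image filename and the USB device node are placeholders (check dmesg for which da device your stick actually is), so this demo copies between temporary files instead of touching real hardware:

Code: [Select]
```shell
# On real hardware the command would be roughly:
#   dd if=OPNsense-dvd-amd64.iso of=/dev/da0 bs=1M conv=sync
# (image filename and device node above are placeholders!)
# This demo copies between temp files so it is safe to run:
IMG=$(mktemp) ; TARGET=$(mktemp)
dd if=/dev/urandom of="$IMG" bs=1M count=4 2>/dev/null   # stand-in 4 MiB image
dd if="$IMG" of="$TARGET" bs=1M conv=sync 2>/dev/null    # the write step
RESULT=$(cmp -s "$IMG" "$TARGET" && echo "write verified")
echo "$RESULT"
rm -f "$IMG" "$TARGET"
```

Double-check the of= device before running the real command; dd will happily overwrite the wrong disk.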

215
Hardware and Performance / Re: Installation stuck at 38% ...
« on: February 15, 2021, 07:09:06 pm »
The system used for the install is nearly 15 years old at this point. Most of those early Core 2 systems were right at the cusp of UEFI support; they can have issues with GPT partitions, especially when using an older BIOS.

I'd make sure the BIOS is the latest version available, and also run a memtest just to rule out any hardware weirdness. Try enabling UEFI in the BIOS (if it's available) and using GPT partitions; if that doesn't work, turn off UEFI and choose MBR. One of the two should get you up and running on old hardware like that. The install log you posted seems to indicate it's attempting a GPT EFI partition. Maybe also re-write the OPNsense image to a different USB stick, just to rule out a stick issue, and try a different USB port. Just throwing out some other possible ideas that may help.

I'm also a migrant from pfSense and every system I have used pfSense on easily runs OPNsense and the install process has been identical. I've had some small issues when installing on a VM but on bare metal hardware, it was always the same steps for installation.

216
Hardware and Performance / Re: Installation stuck at 38% ...
« on: February 15, 2021, 06:29:48 pm »
Device da0 is the USB stick. The error could indicate a bad USB stick, a bad image on the stick, or that you booted from the USB stick and then selected the same stick as the install destination, which will also cause problems.

Device ada0 is the internal hard disk.

During install, how are you booting OPNsense and which device are you selecting to receive the installation?

Generally if you're booting from a USB stick you'll want to pick device ada0 as the installation destination, as that is the first physical hard disk that is enumerated for bootup.

217
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: February 11, 2021, 10:27:08 pm »
Quote from: DiHydro on February 11, 2021, 09:40:20 pm
I am curious if I am seeing this kernel problem on my bare-metal install. I have a passively cooled mini PC with 4 Intel NICs and a J1900 CPU at 2.00GHz and 4 GB of RAM. I know this CPU is fairly old, but the hardware sizing guide says I should be able to do 350-750 Mbit/s throughput. When I have no firewall rules enabled and the default IPS settings I get about 370-380 Mbit/s of my 400 Mbit/s inbound speed. If I enable firewall rules to set up fq_codel, then it drops my throughput to 320-340 Mbit/s. In both of these scenarios I see my CPU going up to 90+% on one thread. I do understand that my throughput will go down with different options like IPS and firewall rules, but I would think that with no other options running this hardware should be able to do better than 380 Mbit/s tops.
Using FQ_Codel or IPS is secondary to the overall discussion here. Both will consume a large amount of CPU cycles and won't illustrate the true throughput capability of the firewall due to their own inherent overhead.

I run a J3455 with a quad-port Intel I340 NIC and can easily push 1 gigabit with the stock ruleset, with plenty of CPU headroom remaining. This unit can also enable FQ_Codel on WAN and still push 1 gigabit, although CPU usage does increase by around 20% at those speeds.

I don't personally run any of the IPS components so I don't have any direct feedback on that. It's worth noting that both of these tests are done on a traditional DHCP WAN connection. If you're using PPPoE, that will be single thread bound and will limit your throughput to the maximum speed of a single core.

What most of the transfer speed tests here illustrate is that FreeBSD seems to have very poor scaling when forwarding packets over 10gbit virtualized NICs. This isn't an issue OPNsense introduced, more one that OPNsense is stuck with due to poor upstream support from FreeBSD. For the vast majority of users on 1 gigabit or lower connections, this won't be a cause for concern in the near future.

218
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: February 10, 2021, 06:13:23 pm »
Here are my latest results.

Recap of my environment:
Server is HP ML10v2 ESXi 6.7 running build 17167734
Xeon E3-1220 v3 CPU
32GB of RAM
SSD/HDD backed datastore (vSAN enabled)

All firewalls are tested with their "out of the box" ruleset; no customizations were made besides configuring the WAN/LAN adapters for these tests. All firewalls have their version of VM Tools installed from the package manager.

The iperf3 client/server are both Fedora Desktop v33. The server sits behind the WAN interface, the client sits behind the LAN interface to simulate traffic through the firewall. No transfer tests are performed hosting iperf3 on the firewall itself.

OPNSense 21.1.1 VM Specs:
VM hardware version 14
2 vCPU
4GB RAM
2x vmx3 NICs

pfSense 2.5.0-RC VM Specs:
VM hardware version 14
2 vCPU
4GB RAM
2x vmx3 NICs

OpenWRT VM Specs:
VM hardware version 14
2 vCPU
1GB RAM
2x vmx3 NICs

Code: [Select]
OPNsense 21.1.1 (netflow disabled) 1500MTU receiving from WAN, vmx3 NICs, all hardware offload disabled, single thread (p1)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  8.10 GBytes  1.16 Gbits/sec  219             sender
[  5]   0.00-60.00  sec  8.10 GBytes  1.16 Gbits/sec                  receiver

Code: [Select]
OPNsense 21.1.1 (netflow disabled) 1500MTU receiving from WAN, vmx3 NICs, all hardware offload disabled, four thread (p4)
[ ID] Interval           Transfer     Bitrate         Retr
[SUM]   0.00-60.00  sec  13.4 GBytes  1.91 Gbits/sec  2752             sender
[SUM]   0.00-60.00  sec  13.3 GBytes  1.91 Gbits/sec                  receiver

Code: [Select]
OPNsense 21.1.1 (netflow disabled) 1500MTU receiving from WAN, vmx3 NICs, all hardware offload enabled, single thread (p1)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   251 MBytes  35.0 Mbits/sec  56410             sender
[  5]   0.00-60.00  sec   250 MBytes  35.0 Mbits/sec                  receiver

Code: [Select]
pfSense 2.5.0-RC 1500MTU receiving from WAN, vmx3 NICs, all hardware offload disabled, single thread (p1)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  15.1 GBytes  2.15 Gbits/sec  1029             sender
[  5]   0.00-60.00  sec  15.0 GBytes  2.15 Gbits/sec                  receiver

Code: [Select]
pfSense 2.5.0-RC 1500MTU receiving from WAN, vmx3 NICs, all hardware offload disabled, four thread (p4)
[ ID] Interval           Transfer     Bitrate         Retr
[SUM]   0.00-60.00  sec  15.3 GBytes  2.19 Gbits/sec  12807             sender
[SUM]   0.00-60.00  sec  15.3 GBytes  2.18 Gbits/sec                  receiver

Code: [Select]
pfSense 2.5.0-RC 1500MTU receiving from WAN, vmx3 NICs, all hardware offload enabled, single thread (p1)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   316 MBytes  44.2 Mbits/sec  48082             sender
[  5]   0.00-60.00  sec   316 MBytes  44.2 Mbits/sec                  receiver

Code: [Select]
OpenWRT v19.07.6 1500MTU receiving from WAN, vmx3 NICs, no UI offload settings (using defaults), single thread (p1)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  34.1 GBytes  4.88 Gbits/sec  21455             sender
[  5]   0.00-60.00  sec  34.1 GBytes  4.88 Gbits/sec                  receiver

Code: [Select]
OpenWRT v19.07.6 1500MTU receiving from WAN, vmx3 NICs, no UI offload settings (using defaults), four thread (p4)
[ ID] Interval           Transfer     Bitrate         Retr
[SUM]   0.00-60.00  sec  43.2 GBytes  6.18 Gbits/sec  79765             sender
[SUM]   0.00-60.00  sec  43.2 GBytes  6.18 Gbits/sec                  receiver


host CPU usage during the transfer was as follows:
OPNsense 97% host CPU used
pfSense 84% host CPU used
OpenWRT 63% host CPU used for p1, 76% host CPU used for p4

In this case, my environment is CPU constrained. However, the purpose of these transfers is to use a best case scenario (all 1500MTU packets) and see how much we can push through the firewall with the given CPU power available. I think we're still dealing with inherent bottlenecks within FreeBSD 12. Both of the BSDs here hit high host CPU usage regardless of the thread count during the transfer. Only the Linux system scaled with more threads and still did not max the host CPU during transfers.
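As a quick sanity check on the tables above: iperf3 reports transfer in binary gigabytes (GiB) but bitrate in decimal Gbit/s, so each bitrate row follows directly from transfer divided by duration:

Code: [Select]
```python
def gbits_per_sec(gbytes: float, secs: float = 60.0) -> float:
    """Convert an iperf3 transfer figure (GiB) to a decimal Gbit/s bitrate."""
    return gbytes * 2**30 * 8 / secs / 1e9

# Matches the OPNsense and OpenWRT single-thread rows above.
print(round(gbits_per_sec(8.10), 2))   # 1.16
print(round(gbits_per_sec(34.1), 2))   # 4.88
```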

I personally use OPNsense and it's a great firewall. Running on bare metal with igb NICs and a modern processor made within the last 5 years or so, it will be plenty to cover gigabit speeds for most people. However, in a virtualized environment, all of the BSDs seem to want a lot of CPU power to scale beyond a steady 1Gbit/s. Perhaps FreeBSD 13 will give us more efficient virtualization throughput?

219
Hardware and Performance / Re: $300 Ryzen Build?
« on: February 03, 2021, 10:55:34 pm »
That should work and give you the overhead you need. I think the main issue is that the hardware will use an order of magnitude more power and would require active cooling.

For instance, I run a passively cooled J3455 CPU with OPNsense and it can easily push gigabit. I have not used it with Sensei or IPS, so I can't give you feedback there. However, the power consumption is very low: with a quad-port I340 NIC and a DC/DC power supply, the system uses 10-11 watts. Contrast this with most AC/DC desktop PSUs and lower-efficiency desktop hardware, which will easily consume 30-40 watts idling and nearly double that under load. Just things to consider if noise/power may be an issue over time.

220
21.1 Legacy Series / Re: Installer freezes at installation type selection screen on ESXi
« on: February 03, 2021, 10:44:40 pm »
I've had this happen with 21.1 and a few times in the past with older versions. Press CTRL+C to get back to the prompt, then re-run the installer; you should be good to go on the second try.

221
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: October 22, 2020, 04:06:09 pm »
Quote from: mimugmail on October 22, 2020, 07:27:38 am
Be honest with yourself: would you buy a piece of hardware with only 2 cores if you have the requirement for 10G? The smallest hardware with 10G interfaces has 4 cores minimum.
I think we may be talking past each other here. I'm not talking about purchasing hardware. I'm discussing a lack of throughput that now exists after an upgrade on hardware that performs at a much higher rate with just a software change. That's why we're running tests on multiple VMs, all with the same specs. There's obviously some bottleneck occurring here that isn't just explained away by core count (or lack thereof).

Quote from: mimugmail on October 19, 2020, 07:38:33 pm
I have customers pushing 6Gbit over vmxnet driver.
I'm more interested in trying to understand what is different in my environment that is causing these issues on VMs. Is this claimed 6Gbit going through a virtualized OPNsense install? Do you have any additional details that we can check? I've even tried changing the CPU core assignment (setting the number of sockets to 1 and adding cores) to see if there was some weird NUMA scaling issue impacting OPNsense. So far everything I have tried has had no impact on throughput; even switching to the beta netmap kernel that is supposed to resolve some of this did not seem to help yet.

222
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: October 22, 2020, 05:03:05 am »
It is odd that so many of us find an artificial ~1Gbit/s limit when testing OPNsense 20.7 on VMware ESXi with vmxnet3 adapters. It looks like there are at least three of us able to reproduce these results now?

I've disabled the hardware blacklist and did not see a difference from the test results I posted here earlier. The only way I can get slightly better throughput is to add more vCPUs to the OPNsense VM; however, this does not scale well. For instance, if I go from 2 vCPU to 4 vCPU, I can get between 1.5Gbit/s and 2.2Gbit/s depending on how much parallelism I select on my iperf clients.

223
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: October 16, 2020, 11:02:43 pm »
I probably should have clarified that. I tested both *sense-based distros just to show that they both take a hit with the FreeBSD 12.x kernel. I don't think this is out of malicious intent from either side, just teething issues due to the new way the 12.x kernel pushes packets. I'm NOT trying to compare OPNsense to pfSense; I merely wanted to show that they both see a hit moving to 12.x.

There is an upside to all of this. I'm running OPNsense 20.7.3 on bare metal at home with the stock kernel. With the FreeBSD 12.x implementations I no longer need to leave FQ_Codel shaping enabled to get A+ scores on my 500/500 Fiber connection. It seems the way that FreeBSD 12.x handles transfer queues is much more efficient. I'm sure as time moves forward this will all get worked out. I'm posting here mainly just to show what I am seeing, and hopefully we can see the numbers get better as newer kernels are integrated.

224
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: October 16, 2020, 06:22:22 pm »
I tried re-running these tests with OPNsense 20.7.3 and also tried the netmap kernel. For my particular case, this did not result in a change in throughput.

I'll recap my environment:
HP Server ML10v2/Xeon E3 1220v3/32GB of RAM

VM configurations:
Each pfSense and OPNsense VM has 2vCPU/4GB RAM/VMX3 NICs
Each pfSense and OPNsense VM has default settings and all hardware offloading disabled

The OPNsense netmap kernel was tested by doing the following:
Code: [Select]
opnsense-update -kr 20.7.3-netmap
reboot

When running these iperf3 tests, each test ran for 60 seconds; all tests were run twice and the second result is recorded here, to give the firewalls time to "warm up" to the throughput load. All tests were performed on the same host, and two VMs were used to simulate a WAN/LAN configuration with separate vSwitches. This lets us push traffic through the firewall instead of using the firewall itself as an iperf3 client.

Below are my results from today:

Code: [Select]
pfSense 2.5.0Build_10-16-20 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  14.8 GBytes  2.12 Gbits/sec  550             sender
[  5]   0.00-60.00  sec  14.8 GBytes  2.12 Gbits/sec                  receiver

Code: [Select]
pfSense 2.4.5p1 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  29.4 GBytes  4.21 Gbits/sec  12054             sender
[  5]   0.00-60.00  sec  29.4 GBytes  4.21 Gbits/sec                  receiver

Code: [Select]
OpenWRT 19.07.3 1500MTU receiving from WAN, vmx3 NICs, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  44.1 GBytes  6.31 Gbits/sec  40490             sender
[  5]   0.00-60.00  sec  44.1 GBytes  6.31 Gbits/sec                  receiver

Code: [Select]
OPNsense 20.7.3 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  5.39 GBytes   771 Mbits/sec  362             sender
[  5]   0.00-60.00  sec  5.39 GBytes   771 Mbits/sec                  receiver

Code: [Select]
OPNsense 20.7.3(netflow disabled) 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  6.66 GBytes   953 Mbits/sec  561             sender
[  5]   0.00-60.00  sec  6.66 GBytes   953 Mbits/sec                  receiver

Code: [Select]
OPNsense 20.7.3(netmap kernel) 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  5.35 GBytes   766 Mbits/sec  434             sender
[  5]   0.00-60.00  sec  5.35 GBytes   766 Mbits/sec                  receiver

Code: [Select]
OPNsense 20.7.3(netmap kernel, netflow disabled) 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  6.55 GBytes   937 Mbits/sec  399             sender
[  5]   0.00-60.00  sec  6.55 GBytes   937 Mbits/sec                  receiver


225
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: September 03, 2020, 08:05:39 pm »
Quote from: mimugmail on September 03, 2020, 06:15:29 am
My first thought was maybe shared forwarding, but you have this with pfsense 2.5 too, correct?
I tried this with a recent build of pfSense 2.5 Development (built 9/2/2020) and was able to get around 2.0Gbit/s using the same test scenario that I posted yesterday. That is still lower than pfSense 2.4.x on FreeBSD 11.2 in the same scenario, but higher than what we're seeing with the OPNsense 20.7 series running the 12.x kernel.

OPNsense is an OSS project © Deciso B.V. 2015 - 2024 All rights reserved