Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - opnfwb

Pages: [1] 2 3 ... 8
1
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: October 22, 2020, 04:06:09 pm »
Quote from: mimugmail on October 22, 2020, 07:27:38 am
Be honest with yourself: would you buy a piece of hardware with only 2 cores if you have a requirement for 10G? The smallest hardware with 10G interfaces has 4 cores minimum.
I think we may be talking past each other here. I'm not talking about purchasing hardware. I'm discussing a throughput drop that now exists after an upgrade, on the same hardware that performs at a much higher rate with only a software change. That's why we're running tests on multiple VMs, all with the same specs. There's obviously some bottleneck occurring here that isn't explained away by core count (or the lack thereof).

Quote from: mimugmail on October 19, 2020, 07:38:33 pm
I have customers pushing 6Gbit over vmxnet driver.
I'm more interested in understanding what is different in my environment that is causing these issues on VMs. Is this claimed 6Gbit going through a virtualized OPNsense install? Do you have any additional details that we can check? I've even tried changing the CPU core assignment (setting the number of sockets to 1 and adding cores) to see if some odd NUMA scaling issue was impacting OPNsense. So far nothing I have tried has had any impact on throughput; even switching to the beta netmap kernel that is supposed to resolve some of this did not seem to help yet.

2
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: October 22, 2020, 05:03:05 am »
It is odd that so many of us find an artificial ~1gbps limit when testing OPNsense 20.7 on VMware ESXi with vmxnet3 adapters. It looks like there are at least three of us who can reproduce these results now?

I've disabled the hardware blacklist and did not see a difference from the test results I posted here earlier. The only way I can get slightly better throughput is to add more vCPUs to the OPNsense VM; however, this does not scale well. For instance, if I go from 2 vCPU to 4 vCPU, I can get between 1.5gbps and 2.2gbps depending on how much parallelism I select on my iperf clients.

3
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: October 16, 2020, 11:02:43 pm »
I probably should have clarified that. I tested both *sense-based distros just to show that they both take a hit with the FreeBSD 12.x kernel. I don't think this is out of malicious intent from either side, just teething issues due to the new way the 12.x kernel pushes packets. I'm NOT trying to compare OPNsense to pfSense; I merely wanted to show that they both see a hit moving to 12.x.

There is an upside to all of this. I'm running OPNsense 20.7.3 on bare metal at home with the stock kernel. With the FreeBSD 12.x implementations I no longer need to leave FQ_Codel shaping enabled to get A+ scores on my 500/500 Fiber connection. It seems the way that FreeBSD 12.x handles transfer queues is much more efficient. I'm sure as time moves forward this will all get worked out. I'm posting here mainly just to show what I am seeing, and hopefully we can see the numbers get better as newer kernels are integrated.

4
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: October 16, 2020, 06:22:22 pm »
I tried re-running these tests with OPNsense 20.7.3 and also tried the netmap kernel. For my particular case, this did not result in a change in throughput.

I'll recap my environment:
HP Server ML10v2/Xeon E3 1220v3/32GB of RAM

VM configurations:
Each pfSense and OPNsense VM has 2vCPU/4GB RAM/VMX3 NICs
Each pfSense and OPNsense VM has default settings and all hardware offloading disabled

The OPNsense netmap kernel was tested by doing the following:
Code: [Select]
opnsense-update -kr 20.7.3-netmap
reboot

When running these iperf3 tests, each test was run for 60 seconds. All tests were run twice, and the second result is recorded here, to give the firewalls time to "warm up" to the throughput load. All tests were performed on the same host, and two VMs were used to simulate a WAN/LAN configuration with separate vSwitches. This lets us push traffic through the firewall, instead of using the firewall as an iperf3 client.

Below are my results from today:

Code: [Select]
pfSense 2.5.0Build_10-16-20 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  14.8 GBytes  2.12 Gbits/sec  550             sender
[  5]   0.00-60.00  sec  14.8 GBytes  2.12 Gbits/sec                  receiver

Code: [Select]
pfSense 2.4.5p1 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  29.4 GBytes  4.21 Gbits/sec  12054             sender
[  5]   0.00-60.00  sec  29.4 GBytes  4.21 Gbits/sec                  receiver

Code: [Select]
OpenWRT 19.07.3 1500MTU receiving from WAN, vmx3 NICs, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  44.1 GBytes  6.31 Gbits/sec  40490             sender
[  5]   0.00-60.00  sec  44.1 GBytes  6.31 Gbits/sec                  receiver

Code: [Select]
OPNsense 20.7.3 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  5.39 GBytes   771 Mbits/sec  362             sender
[  5]   0.00-60.00  sec  5.39 GBytes   771 Mbits/sec                  receiver

Code: [Select]
OPNsense 20.7.3(netflow disabled) 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  6.66 GBytes   953 Mbits/sec  561             sender
[  5]   0.00-60.00  sec  6.66 GBytes   953 Mbits/sec                  receiver

Code: [Select]
OPNsense 20.7.3(netmap kernel) 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  5.35 GBytes   766 Mbits/sec  434             sender
[  5]   0.00-60.00  sec  5.35 GBytes   766 Mbits/sec                  receiver

Code: [Select]
OPNsense 20.7.3(netmap kernel, netflow disabled) 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  6.55 GBytes   937 Mbits/sec  399             sender
[  5]   0.00-60.00  sec  6.55 GBytes   937 Mbits/sec                  receiver
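As a sanity check on the numbers above: iperf3's "Transfer" column is in binary gigabytes (GiB) while "Bitrate" is in decimal gigabits, and the two columns are consistent with each other. A small Python sketch of the conversion (my own cross-check, not part of the test setup):

```python
# Convert an iperf3 transfer total (reported in GiB) over a test duration
# into the average bitrate in decimal Gbits/sec, matching iperf3's output.
def avg_gbps(gibytes: float, seconds: float = 60.0) -> float:
    bits = gibytes * 2**30 * 8        # GiB -> bytes -> bits
    return bits / seconds / 1e9       # decimal gigabits per second

# e.g. the pfSense 2.5.0 run: 14.8 GBytes over 60 s
print(round(avg_gbps(14.8), 2))       # 2.12, matching the reported bitrate
```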


5
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: September 03, 2020, 08:05:39 pm »
Quote from: mimugmail on September 03, 2020, 06:15:29 am
My first thought was maybe shared forwarding, but you have this with pfsense 2.5 too, correct?
I tried this with the recent build of pfSense 2.5 Development (built 9/2/2020) and was able to get around 2.0gbits/sec using the same test scenario that I posted about yesterday. So it is still lower throughput than pfSense 2.4.x running on FreeBSD 11.2 in the same test scenario, however it's still higher than what we're seeing with the OPNsense 20.7 series running the 12.x kernel.

6
Hardware and Performance / Re: Poor Throughput (Even On Same Network Segment)
« on: September 02, 2020, 11:37:08 pm »
Just wanted to post here to corroborate the excellent testing and numbers that OP is seeing.

My testing setup is as follows:
ESXi 6.7u3, host has an E3 1220v3 and 32GB of RAM
All Firewall VMs have 2vCPU. 5GB of RAM allocated to OPNsense.
VMXnet3 NICs negotiated at 10gbps

In pfSense and OPNsense, I disabled all of the hardware offloading features. I am using client and server VMs on the WAN and LAN sides of the firewall VMs. This means I am pushing/pulling traffic through the firewalls, I am not running iperf directly on any of the firewalls themselves. Because I am doing this on a single ESXi host and the traffic is within the same host/vSwitch, the traffic is never routed to my physical network switch and therefore I can test higher throughput.

pfSense and OPNsense were both out of the box installs with their default rulesets. I did not add any packages or make any config changes outside of making sure that all hardware offloading was disabled. All iperf3 tests were run with the LAN side client pulling traffic through the WAN side interface, to simulate a large download. However, if I perform upload tests, my throughput results are the same. All iperf3 tests were run for 60 seconds and used the default MTU of 1500. The results below show the average of the 60 second runs. I ran each test twice, and used the final result to allow the firewalls to "warm up" and stabilize with their throughput during testing.

Code: [Select]
pfSense 2.4.5p1 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  31.5 GBytes  4.50 Gbits/sec  11715             sender
[  5]   0.00-60.00  sec  31.5 GBytes  4.50 Gbits/sec                  receiver

OpenWRT 19.07.3 1500MTU receiving from WAN, vmx3 NICs, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  47.5 GBytes  6.81 Gbits/sec  44252             sender
[  5]   0.00-60.00  sec  47.5 GBytes  6.81 Gbits/sec                  receiver

OPNsense 20.7.2 1500MTU receiving from WAN, vmx3 NICs, all hardware offloading disabled, default ruleset
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  6.83 GBytes   977 Mbits/sec  459             sender
[  5]   0.00-60.00  sec  6.82 GBytes   977 Mbits/sec                  receiver

I also noticed that while running a throughput test on OPNsense, one of the vCPUs is completely consumed. I did not see this behavior with Linux or pfSense in my testing; the attached screenshot shows the CPU usage while the iperf3 test is running.

7
20.7 Production Series / Re: Unbound DNS query & assistance
« on: August 14, 2020, 04:08:37 pm »
Yes, I should have clarified that in my response. What I was trying to convey is that most people will choose one of the three large providers: Google, Cloudflare, or Quad9.

Regardless of which one you prefer, I would only recommend configuring Unbound to use one provider at a time. If you do want to run multiple providers, align them so you aren't mixing filtered and unfiltered results; that keeps the client experience consistent and can reduce troubleshooting if there's a DNS issue.

8
20.7 Production Series / Re: Unbound DNS query & assistance
« on: August 13, 2020, 04:41:46 pm »
Quote from: hsimah on August 13, 2020, 03:51:23 am
Can Unbound DNS probe every server I have listed and serve up the result which responded first? If so, how would I configure this?
This used to be possible with DNSMASQ; there was a separate option to query sequentially, or in a round-robin style across all specified DNS servers.

However, for Unbound, I'm only aware of it using a round robin style query by default.

It's also worth noting that your config mixes two DNS providers with different use cases. Your Google DNS and Cloudflare DNS will do DNSSEC/DoT but no filtering. Your Quad9 will do DNSSEC/DoT and malware filtering. Because Unbound will randomly query either one, you may get inconsistent results back to your clients. It's very likely that Google may recommend one CDN location while Quad9 provides results for another. You'd be better off picking just one of those services. Which one is another discussion entirely, but Quad9 has a much better stance on user privacy, so I know which one I'd go with.  :)
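For example (purely illustrative on my part, not from the original config), pinning Unbound to Quad9 alone in the Advanced/custom options would look something like this; the part after "#" is the TLS auth name used for certificate validation:

```
tls-cert-bundle: "/etc/ssl/cert.pem"
forward-zone:
  name: "."
  forward-tls-upstream: yes
  forward-addr: 9.9.9.9@853#dns.quad9.net
  forward-addr: 149.112.112.112@853#dns.quad9.net
```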

9
20.1 Legacy Series / Re: Unbound Plus Plugin and DoT hostname validation?
« on: May 08, 2020, 03:54:59 pm »
Thanks for the reply. If it is helpful, I am happy to test future versions. I have a few OPNsense VMs in a lab that I can demo stuff on before I push it to production.

10
20.1 Legacy Series / Unbound Plus Plugin and DoT hostname validation?
« on: May 08, 2020, 06:11:18 am »
I have a question for @mimugmail, or anyone else who may know: how does the Unbound Plus plugin do hostname validation for DoT?

Currently, I'm using regular Unbound with the following entries in the Advanced section:
Code: [Select]
# TLS Config
tls-cert-bundle: "/etc/ssl/cert.pem"
# Forwarding Config
forward-zone:
  name: "."
  forward-tls-upstream: yes
  forward-addr: 1.1.1.1@853#1dot1dot1dot1.cloudflare-dns.com
  forward-addr: 1.0.0.1@853#1dot1dot1dot1.cloudflare-dns.com

I would like to convert to the Unbound Plus plugin and enter my DoT servers there. However, it does not appear to use the hostname for validation, only the IP and port?
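For context, the validation in question is ordinary TLS server-certificate verification against the name after the "#". A minimal Python sketch of what that check amounts to (the Quad9 address and auth name are placeholders of mine, not from the plugin):

```python
import socket
import ssl

def dot_connect(ip: str, auth_name: str, port: int = 853) -> ssl.SSLSocket:
    """Open a DoT connection, verifying the server certificate against
    auth_name (the part after '#' in Unbound's forward-addr syntax)."""
    ctx = ssl.create_default_context()  # system CA bundle, strict defaults
    # A default context already enforces the two things that matter here:
    assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED
    raw = socket.create_connection((ip, port), timeout=5)
    # The certificate must chain to a trusted CA *and* match auth_name,
    # which is exactly what validating by IP/port alone would skip.
    return ctx.wrap_socket(raw, server_hostname=auth_name)

# Usage (requires network): dot_connect("9.9.9.9", "dns.quad9.net")
```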

11
20.1 Legacy Series / Re: Packets are being ignored - why?
« on: April 21, 2020, 12:42:26 am »
You mention initial problems when migrating rule sets, how were these migrated originally? Did you have to manually re-create them in the GUI or was it some other method?

On my OPNsense test box and on my production box, when I connect via SSH and drop to a console, I get a full output of the rules present when I run "pfctl -sr". I'm running OPNsense 20.1.4 AMD64/OpenSSL.

12
20.1 Legacy Series / Re: Packets are being ignored - why?
« on: April 20, 2020, 04:02:20 pm »
It's hard to say without more detail on your rulesets (screenshots, including the "automatically generated" rules). Have you tried an external port scanner to see if you have other holes on the WAN interface that should not be open? Something like the ShieldsUP scanner on grc.com can be useful to make sure that the rule you want to work is actually doing what it's supposed to do.

What I can say is that by default, OPNsense will block all unsolicited incoming connections, just like pfSense does. So I suspect this is less of an OPNsense issue and more of a tweaking issue that will need to be reviewed line by line to find the offending rule.

13
20.1 Legacy Series / Re: Install files verification fails
« on: April 14, 2020, 09:19:58 pm »
I downloaded just now from the same mirror in your first post and the file hash appears to match for me. This is on a Windows box without openssl, so I can't run the other verification steps you list.

Code: [Select]
Get-FileHash .\OPNsense-20.1-OpenSSL-vga-amd64.img.bz2 -algorithm sha256

Algorithm       Hash                                                                   Path
---------       ----                                                                   ----
SHA256          019A877C4B4CB96CFDA62D041774A91C030C5A8ECD58F8C3FD0067C7AC392982       D:\downloads\OPNsense-20.1-Op...

PS D:\downloads> cat .\OPNsense-20.1-OpenSSL-checksums-amd64.sha256
SHA256 (OPNsense-20.1-OpenSSL-dvd-amd64.iso.bz2) = 4b15e9b3d72732d325c5eaf46ba34575d4de8cdc3e3ac1b10666c7372563be6d
SHA256 (OPNsense-20.1-OpenSSL-nano-amd64.img.bz2) = 27544a78ae03d480a483cfd2e7cfa703b60e50938a1ed188ec3ccde6c426fefe
SHA256 (OPNsense-20.1-OpenSSL-serial-amd64.img.bz2) = f93bbcbe92059c5de49f22d485da292952b48658a28d1cdaf83191e8c95c03c2
SHA256 (OPNsense-20.1-OpenSSL-vga-amd64.img.bz2) = 019a877c4b4cb96cfda62d041774a91c030c5a8ecd58f8c3fd0067c7ac392982
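The same check can be scripted; here's a small Python sketch (file names are from the mirror, everything else is illustrative) that hashes an image and compares it, case-insensitively, against the matching line in the .sha256 file:

```python
import hashlib
import re

def sha256_of(path: str) -> str:
    """SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def expected_hash(checksums: str, filename: str) -> str:
    """Extract the digest for `filename` from BSD-style
    'SHA256 (name) = digest' checksum-file contents."""
    m = re.search(r"SHA256 \(%s\) = ([0-9a-fA-F]{64})" % re.escape(filename),
                  checksums)
    if m is None:
        raise KeyError(filename)
    return m.group(1)

def matches(actual: str, expected: str) -> bool:
    # Hex digests, so compare case-insensitively
    # (Get-FileHash prints upper case, the .sha256 file is lower case).
    return actual.lower() == expected.lower()
```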

14
20.1 Legacy Series / Re: Default route persists when upstream gateway down
« on: April 14, 2020, 05:30:10 am »
I presume this is a scenario in which you have multiple gateways defined and you want the router to switch to a new gateway if the current one fails? Do you have any gateway groups with gateway weighting defined yet? That should accomplish what you want: with multiple WAN interfaces, traffic fails over when a gateway is marked "down".

Another option that is helpful for multiple WAN (gateways) is to enable state killing on gateway failure. This prevents some clients on the LAN from re-using an existing connection through a gateway that has failed. You can set this under Firewall/Settings/Advanced/Gateway Monitoring.

15
Hardware and Performance / Re: Constant high load on idle install
« on: February 01, 2020, 08:10:56 pm »
I saw similar issues on a fresh virtualized install. In my case, I was also seeing pflog0 promiscuous enable/disable messages spamming the logs many times per second. This seemed to be related to IPv6 being unable to pull a prefix delegation on the WAN interface of the OPNsense VM.

Try disabling IPv6 on WAN and see if this clears it up? If so, it's likely the same issue I saw in my lab.

OPNsense is an OSS project © Deciso B.V. 2015 - 2021 All rights reserved