Messages - opnfwb

#301
Yes, I think you are technically correct. The reason I mentioned fq_codel is that I was a longtime (10+ years) user of the other *sense product, and I found their traffic shaping options frustrating: every gain came with a bad compromise.

I actually switched to OPNsense primarily because fq_codel was included early on, and it has greatly simplified bandwidth shaping on my networks. You can still create individual pipes and rules to push traffic around, and use bandwidth limits to carve out bandwidth for your desired network traffic. But using fq_codel as the scheduler results in much less headache with dropped traffic, at least in my experience.

This has kind of reset how I look at traffic shaping. The old *sense product had strict queue limits and poor schedulers, which forced me to be overly precise with traffic shaping, and I'd always lose something when I gained something. With fq_codel, I can take a flatter approach with more basic queuing and still get better QoS on my network. Sorry for the long reply, and I apologize if this isn't exactly on topic, but I thought I would mention it in case trying it helps in your situation too. I realize this won't be for everybody, but in my use cases it has helped a lot.
#302
This is not exactly on topic, but I was curious if you could try this and report back. I've been shocked at how well OPNsense's fq_codel implementation works, with virtually no tuning or knobs required.

Edit the pipe, change the scheduler type to "FlowQueue-CoDel", and leave all the other options unchecked. Set your desired bandwidth limit as usual, save/apply the changes, and re-run your tests. I've been pleasantly surprised at how well fq_codel manages bandwidth while causing minimal packet loss during periods of congestion.
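
For context, the OPNsense shaper is built on ipfw/dummynet, so the GUI change above corresponds roughly to the following at the shell. This is only a sketch: the 500Mbit cap and the em0 WAN interface are assumptions, and the GUI manages the real pipe and rule numbers for you.

# Hypothetical 500Mbit pipe using the fq_codel scheduler
ipfw pipe 1 config bw 500Mbit/s type fq_codel
# Push outbound WAN traffic through the pipe (interface name assumed)
ipfw add 100 pipe 1 ip from any to any out via em0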
#303
Cloudflare is presently having TLS issues. I'm using Quad9 DNS over TLS and it's been working.

https://community.cloudflare.com/t/1-1-1-1-was-working-but-not-anymore/15136
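
If you want to check whether a given provider's DoT endpoint is answering, a plain TLS handshake test from the firewall shell is enough (a sketch; substitute any of the resolver IPs):

openssl s_client -connect 9.9.9.9:853
# A healthy endpoint prints its certificate chain and holds the session open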
#304
Quote from: RickNY on April 04, 2018, 04:47:16 PM
The primary difference here being that the instructions there also included the "do-tcp: yes" directive..
This is a good observation; I noticed it as well when I was setting this up yesterday and reading the Calomel Unbound tutorial. However, I checked the default unbound.conf file on my box and it already included "do-tcp: yes", so I assume it is baked into OPNsense already. It probably doesn't hurt to list it again in the advanced options, but in my case it was not necessary because it was already part of the baseline config.

For those interested, this is my unbound.conf file; you can see the advanced options appended to the bottom by OPNsense for the DNS-over-TLS servers. I'm only using Quad9 at the moment.

Also worth noting: my unbound.conf includes additional tweaks that were configured via Services/Unbound/Advanced, so it may look a little different from a 100% stock file, but the do-tcp: yes value was there even before customization.
##########################
# Unbound Configuration
##########################

##
# Server configuration
##
server:
chroot: /var/unbound
username: unbound
directory: /var/unbound
pidfile: /var/run/unbound.pid
use-syslog: yes
port: 53
verbosity: 1
hide-identity: yes
hide-version: yes
harden-referral-path: no
do-ip4: yes
do-ip6: yes
do-udp: yes
do-tcp: yes
do-daemonize: yes
module-config: "validator iterator"
cache-max-ttl: 86400
cache-min-ttl: 0
harden-dnssec-stripped: yes
serve-expired: yes
outgoing-num-tcp: 20
incoming-num-tcp: 20
num-queries-per-thread: 1024
outgoing-range: 2048
infra-host-ttl: 900
infra-cache-numhosts: 10000
unwanted-reply-threshold: 0
jostle-timeout: 500
msg-cache-size: 100m
rrset-cache-size: 200m
num-threads: 4
msg-cache-slabs: 4
rrset-cache-slabs: 4
infra-cache-slabs: 4
key-cache-slabs: 4

auto-trust-anchor-file: /var/unbound/root.key
prefetch: yes
prefetch-key: yes
# Statistics
# Unbound Statistics
statistics-interval: 0
extended-statistics: yes
statistics-cumulative: yes

# Interface IP(s) to bind to
interface: 0.0.0.0
interface: ::0
interface-automatic: yes



# DNS Rebinding
# For DNS Rebinding prevention
#
# All these addresses are either private or should not be routable in the global IPv4 or IPv6 internet.
#
# IPv4 Addresses
#
private-address: 0.0.0.0/8       # "This" network (RFC 1122)
private-address: 10.0.0.0/8
private-address: 100.64.0.0/10
private-address: 127.0.0.0/8     # Loopback Localhost
private-address: 169.254.0.0/16
private-address: 172.16.0.0/12
private-address: 192.0.0.0/24    # IANA IPv4 special purpose net
private-address: 192.0.2.0/24    # Documentation network TEST-NET
private-address: 192.168.0.0/16
private-address: 198.18.0.0/15   # Used for testing inter-network communications
private-address: 198.51.100.0/24 # Documentation network TEST-NET-2
private-address: 203.0.113.0/24  # Documentation network TEST-NET-3
private-address: 233.252.0.0/24  # Documentation network MCAST-TEST-NET
#
# IPv6 Addresses
#
private-address: ::1/128         # Loopback Localhost
private-address: 2001:db8::/32   # Documentation network IPv6
private-address: fc00::/8        # Unique local address (ULA) part of "fc00::/7", not defined yet
private-address: fd00::/8        # Unique local address (ULA) part of "fc00::/7", "/48" prefix group
private-address: fe80::/10       # Link-local address (LLA)


# Access lists
include: /var/unbound/access_lists.conf

# Static host entries
include: /var/unbound/host_entries.conf

# DHCP leases (if configured)
include: /var/unbound/dhcpleases.conf

# Domain overrides
include: /var/unbound/domainoverrides.conf

# Unbound custom options
ssl-upstream: yes
forward-zone:
name: "."
forward-addr: 9.9.9.9@853
forward-addr: 149.112.112.112@853
forward-addr: 2620:fe::fe@853




###
# Remote Control Config
###
include: /var/unbound/remotecontrol.conf
#305
Older Intel NIC chipsets use the em(4) driver, whereas newer chipsets use the igb(4) driver. The igb chips and driver have more offload features and tweaks than em; however, both families are good quality and will easily support a full-duplex 1000/1000 connection without overwhelming the processor.
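
If you're not sure which driver a given card attached with, a quick look from the shell will tell you (a sketch; interface names and output vary by system):

dmesg | grep -E '^(em|igb)[0-9]'   # which driver claimed each port at boot
pciconf -lv                        # PCI device list with vendor/chipset names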

The wattage estimate seems quite high. If you leave powerd enabled and use an efficient power supply, I would expect idle draw somewhere between 20W and 40W, and maybe 50W at most under full load.

You may be better off sourcing one of the all-in-one Qotom firewall boxes, which are passively cooled and use 12VDC power supplies, for better efficiency and space management.

For reference, one of my current (admittedly overkill) OPNsense boxes has an Intel i3-4130, 16GB DDR3, a 100GB 2.5-inch 5400RPM HDD, and a quad-port Intel 82580 NIC. The whole thing is powered by a 110V ATX 80+ Bronze 280W power supply. Power consumption is 30W.

Here is the manpage for the EM driver and supported chipsets: https://www.freebsd.org/cgi/man.cgi?query=em&apropos=0&sektion=0&manpath=FreeBSD+11.1-RELEASE&arch=amd64&format=html

Here's the manpage for the IGB driver and supported chipsets: https://www.freebsd.org/cgi/man.cgi?query=igb&apropos=0&sektion=0&manpath=FreeBSD+11.1-RELEASE&arch=amd64&format=html
#306
This is a call for testers of DNS over TLS with the new Quad9 and Cloudflare DNS servers that have been discussed recently. I wanted to see if we could get the default Unbound instance in OPNsense to use these new encrypted, privacy-oriented DNS providers.

I'm currently using these, and they appear to be working: in the pfTop view on OPNsense I can see all of the outbound DNS queries on port 853 going to the addresses I specified in the custom options. Internal LAN queries still come in over port 53 as usual, but outbound queries to the WAN now happen on port 853 to the DNS-over-TLS providers listed below.

Here are the settings I have configured to get Unbound to send DNS over TLS to Quad9 and Cloudflare.

OPNsense x86_64 18.1.5
UnboundDNS/General
Enable DNS resolver (checked)
Enable DNSSEC support (checked)
Enable Forwarding mode (UNCHECKED, had to do this to get these to work)

Paste these values into the custom options field, then save/apply settings.
Custom Options:
ssl-upstream: yes
forward-zone:
name: "."
forward-addr: 9.9.9.9@853 #Quad9 ip4
forward-addr: 149.112.112.112@853 #Quad9 ip4
forward-addr: 2620:fe::fe@853 #Quad9 ip6
forward-addr: 1.1.1.1@853 #Cloudflare ip4
forward-addr: 1.0.0.1@853 #Cloudflare ip4
forward-addr: 2606:4700:4700::1111@853 #Cloudflare ip6
forward-addr: 2606:4700:4700::1001@853 #Cloudflare ip6


You should now have DNS queries going out on port 853 using TLS to the addresses specified in the custom options field. Obviously, if you aren't using IPv6, you can omit the IPv6 addresses, and if you only want Quad9 or only Cloudflare, omit whichever provider you don't want to use.
I'd love to have other folks try this out and report their findings.
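
If you'd rather verify from a shell than from pfTop, a packet capture on the WAN interface should show the resolver sessions leaving on port 853 (a sketch; substitute your WAN interface name for em0):

tcpdump -ni em0 'tcp port 853'   # outbound DNS-over-TLS sessions to the forwarders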

As far as I can tell this seems to be working very well and it was quite easy to configure. However, I don't consider myself an "advanced" user and I would like to see feedback from others here just to ensure that this is a good setup to use going forward.
#307
18.1 Legacy Series / Re: em0 down for no reason
March 17, 2018, 01:01:30 PM
I have posted a reply in the second thread with suggestions on some performance tweaks/settings for the EM driver. Let us know if this helps to resolve the issue.

https://forum.opnsense.org/index.php?topic=7580.msg34822#msg34822
#308
There are a few things you can check to rule out a faulty NIC. I have used dual-port Intel EM cards with the 82571EB chipset and have had very good reliability from them with a little bit of tweaking, which I'll outline below. From your description it sounds like you're running the identical card, so hopefully this helps you get up and running with stability.

First, let's ensure that the NIC has a unique IRQ and is not sharing IRQs.
Run this command and let us know the results:
vmstat -i

You can also try applying some EM-driver-specific tuning variables to help improve the performance and stability of EM-series NICs. I've documented these settings below if you want to try them; you will need a reboot to fully apply them once saved. I have included lines for a dual-port EM config, so if you have more ports you will need to add matching lines for each device.

In /boot/loader.conf.local (you may need to create this file if it isn't already present):

hw.em.num_queues=0             # 0 = let the driver choose the queue count
hw.em.txd="2048"               # transmit descriptors per ring
hw.em.rxd="2048"               # receive descriptors per ring
net.link.ifqmaxlen="4096"      # deeper interface send queue
hw.em.enable_msix=1            # prefer MSI-X interrupts
hw.pci.enable_msix=1
dev.em.0.fc=0                  # disable flow control on em0
dev.em.1.fc=0                  # disable flow control on em1
hw.em.rx_process_limit="-1"    # -1 = no per-interrupt RX packet limit
hw.em.tx_process_limit="-1"    # -1 = no per-interrupt TX packet limit


In the WebGUI under System/Settings/Tunables, add one line for each of the following (a quick live test is sketched below the list):

dev.em.0.eee_control: 0
dev.em.1.eee_control: 0
dev.em.0.fc: 0
dev.em.1.fc: 0
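
Side note: these tunables are ordinary sysctl OIDs, so you can flip them live from a shell to experiment before committing them (a sketch; device numbers match your ports, and eee_control only exists where the chipset supports EEE):

sysctl dev.em.0.fc=0             # disable flow control on em0 at runtime
sysctl dev.em.0.eee_control=0    # disable energy-efficient ethernet on em0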


It's worth noting that I've only used these settings with Intel cards on Intel-based systems (Intel chipsets/CPUs). That shouldn't matter, but I haven't tried any of these tweaks on AMD-based systems with their different chipsets. Depending on BIOS settings and other hardware differences, some of these values may need adjustment to fit your environment. Give them a try and let us know the results. You may find that you need to set the MSI-X variables to zero (disabled), depending on how the chipset in the router prefers to handle interrupts.
#309
Just remembered another thing worth checking: traffic shaping. Since you're using a new install this probably isn't the issue, but a traffic shaper can influence those speed tests.

When I had AT&T Gigapower and enabled fq_codel without any tuning, it looked exactly like your OPNsense speed results: downloads in the 600s, uploads in the 700s to 800s. If you have any traffic shaping enabled, turn it off first and see what happens.
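
A quick way to confirm whether any shaper pipes are actually active (a sketch, and it assumes the shaper's ipfw/dummynet backend is loaded at all):

ipfw pipe show   # no pipes listed (or a module error) means nothing is shaping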
#310
Based on the screenshots, I'm assuming this is AT&T Gigapower? I had Gigapower for a while, used OPNsense with it, and consistently saw 941/941 speeds. My setup isn't exactly like yours, but I do run Intel NICs.

I would suggest simple things first from a tuning standpoint to see if you can rule out issues with the way OPNsense is using the NICs on that Protectli box. Forum member dcol has an excellent Intel NIC tuning guide posted in the IDS section: https://forum.opnsense.org/index.php?topic=6590.0

The main things I would check: make sure your WAN and LAN NICs have different IRQs, make sure they're using MSI-X, and turn off flow control on both NICs (all of these are covered in dcol's guide linked above; a quick check is sketched below). Beyond that, are you using the AT&T-provided gateway in DMZ passthrough, or did you bypass the gateway completely? I personally bypassed the AT&T gateway when I had Gigapower and plumbed the OPNsense WAN port directly to the fiber ONT. This made troubleshooting and speed tweaking a lot simpler because I wasn't relying on the AT&T equipment to keep up.
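
The IRQ/MSI-X part is quick to eyeball from a shell:

vmstat -i   # with MSI-X active, each NIC queue gets its own irq line instead of sharing one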
#311
The Sempron 2200 is a 14-year-old CPU, and its Socket A platform only supported PCI and AGP connectivity.

A PCI NIC will not be able to push full gigabit speeds, and it will run even slower when sharing the PCI bus with other traffic-intensive devices (sound cards, other NICs, etc.).
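
(Back-of-envelope: classic 32-bit/33MHz PCI tops out around 133MB/s, roughly 1Gbit/s, and that budget is shared by every device on the bus and by both traffic directions. Full-duplex gigabit needs up to 2Gbit/s aggregate, so even a lone PCI NIC can't sustain it, and bus overhead pushes real-world numbers lower still.)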

The N3060 that you tested is orders of magnitude faster and uses PCI Express connectivity for all of its NICs, allowing full-duplex gigabit traffic on all ports. If you want to go faster, you'll need to upgrade your OPNsense hardware platform. You won't need to spend much: a $50 Intel Core 2 Dell desktop from eBay plus a dual- or quad-port Intel PCIe NIC will do full gigabit with NAT.
#312
Thank you Bart and Franco. I was hoping I would not have to erase the data.

Performing the steps that Bart outlined has the Insight graphs working again, albeit with no history.

Is there any chance ZFS would have prevented this kind of corruption from happening? The other unmentionable firewall that ends with "sense" has a ZFS filesystem option, but I refuse to use that product. :) Any chance that ZFS is coming to our beloved OPNsense?
#313
Greetings. Unfortunately, an extended power loss fully drained my UPS battery and my OPNsense box lost power as a result. The box boots up fine after the outage; however, I noticed that the NetFlow/Insight graphing feature is no longer working.

I checked the logs and noticed this error:
Jan 17 11:34:07 flowd_aggregate.py: flowd aggregate died with message
Traceback (most recent call last):
  File "/usr/local/opnsense/scripts/netflow/flowd_aggregate.py", line 148, in run
    aggregate_flowd(do_vacuum)
  File "/usr/local/opnsense/scripts/netflow/flowd_aggregate.py", line 79, in aggregate_flowd
    stream_agg_object.add(flow_record_cpy)
  File "/usr/local/opnsense/scripts/netflow/lib/aggregates/interface.py", line 70, in add
    super(FlowInterfaceTotals, self).add(flow)
  File "/usr/local/opnsense/scripts/netflow/lib/aggregate.py", line 260, in add
    self._update_cur.execute(self._insert_stmt, flow)
DatabaseError: database disk image is malformed
Jan 17 11:31:08 configd.py: [fe7f64c6-c9d4-4c36-b09b-e086ab0df1c1] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 11:31:06 configd.py: [1d50babb-6223-4bc8-a773-971ae9d3a83e] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 11:30:48 configd.py: [cbd6107d-581c-4bee-9635-ca0ac8756cb0] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 11:13:09 configd.py: [05bd1a67-c19b-44a7-bea4-85cb83f2064f] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 11:12:59 configd.py: [a58562cc-04a9-444a-b5cb-0aff086f3b3c] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 11:12:56 configd.py: [3ce79efa-f2be-4fa6-ba24-7d129adda701] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 11:12:53 configd.py: [196a3b94-54d7-403a-a91b-541b0b55a882] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 11:12:51 configd.py: [945799ac-c0d0-4ba3-9ea9-9915654bcc32] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 11:12:48 configd.py: [11080cc3-73aa-488b-aa4a-35370304ba3c] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 11:11:53 configd.py: [dd158559-3de8-419c-a2ab-7ab6348d8ad8] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 11:11:27 pkg: flowd reinstalled: 0.9.1_3 -> 0.9.1_3


I have tried reinstalling the flowd package, but this does not fix the issue.

If I click on NetFlow and then click on "Apply" in an attempt to reset NetFlow, I see the following log output:
Jan 17 13:56:42 flowd_aggregate.py: flowd aggregate died with message
Traceback (most recent call last):
  File "/usr/local/opnsense/scripts/netflow/flowd_aggregate.py", line 148, in run
    aggregate_flowd(do_vacuum)
  File "/usr/local/opnsense/scripts/netflow/flowd_aggregate.py", line 79, in aggregate_flowd
    stream_agg_object.add(flow_record_cpy)
  File "/usr/local/opnsense/scripts/netflow/lib/aggregates/interface.py", line 70, in add
    super(FlowInterfaceTotals, self).add(flow)
  File "/usr/local/opnsense/scripts/netflow/lib/aggregate.py", line 260, in add
    self._update_cur.execute(self._insert_stmt, flow)
DatabaseError: database disk image is malformed
Jan 17 13:56:40 configd.py: [263dcf1e-5cc6-47f5-b017-eae4161ad501] request netflow data aggregator top usage for FlowInterfaceTotals
Jan 17 13:56:40 configd.py: [ab58cec3-e55c-41f0-b84c-9f0f30b28be7] request netflow data aggregator metadata
Jan 17 13:56:40 configd.py: [3691858b-b78f-4673-b964-7d9aba0a150f] request netflow data aggregator top usage for FlowDstPortTotals
Jan 17 13:56:40 configd.py: [a555a9e0-31ba-4378-8b2f-4ea3fddaf771] request netflow data aggregator top usage for FlowInterfaceTotals
Jan 17 13:56:40 configd.py: [5300170f-445a-45be-804e-66d765d297fa] request netflow data aggregator top usage for FlowSourceAddrTotals
Jan 17 13:56:40 configd.py: [1f3cc143-1c76-404f-9bf6-5021ef6a211c] request netflow data aggregator timeseries for FlowInterfaceTotals
Jan 17 13:56:36 configd.py: [9fd5aec5-5f19-477a-bcb6-b2df58b6034a] restart netflow data aggregator
Jan 17 13:56:36 configd.py: [42b348f2-e423-4bfc-9d23-7133f065c5ce] request status of netflow collector
Jan 17 13:56:34 configd.py: [c1c38d21-f5d0-4553-9d9e-fa82e5d6bd17] start netflow
Jan 17 13:56:34 configd.py: [cb188c53-dec4-4fa6-a181-235c1a7b52bf] stop netflow


What else can I check or reinstall to get NetFlow working again? This is on OPNsense 17.7.11.
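
Since "database disk image is malformed" is SQLite's corruption error, I suspect one of the aggregate databases was damaged in the power loss. Assuming the Insight databases live under /var/netflow and the sqlite3 CLI is available (both assumptions on my part), an integrity check along these lines should identify the damaged file:

for db in /var/netflow/*.sqlite; do
  echo "== $db"
  sqlite3 "$db" 'PRAGMA integrity_check;'
done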
#314
17.7 Legacy Series / Re: Comcast Business 10.1.10.1
January 16, 2018, 06:05:08 AM
Try unchecking the "Block private networks" option on the WAN interface and see if that helps.
#315
17.7 Legacy Series / Re: Comcast Business IPv6 Setting
January 16, 2018, 06:02:44 AM
I don't have exactly the same setup, but have you tried a prefix delegation size of 64 on the WAN-side DHCPv6 client?

I'm running DHCPv6 with a /64 prefix delegation on Spectrum/TWC and am getting WAN and LAN IPv6 addresses, and firewalling/routing is working quite well.