Topics - bb-mitch

1
20.7 Legacy Series / understanding packet graph...
« on: January 14, 2021, 11:28:38 pm »
Please see the attached image, from Reporting -> Health -> Packets (LAN).

I'm trying to track down a problem, though I'm not sure yet what the problem is. IPv6 is disabled on the interface, and yet inpass6 shows 140m, which I presume means 140 million pps? How is that possible?

On the same graph, LAN inblock ranges from 3.0 to over 350m, but with inpass6 showing values at all, I'm worried I can't trust those numbers.

Any ideas what's happening?

Thank you!

2
20.7 Legacy Series / opnsense / pfctl bug?
« on: January 13, 2021, 08:59:54 pm »
In the "olden days" clicking the X next to a state in OPNsense / pfSense worked: the state was gone. Of course, if the internal host continues to send traffic a new state will be created (on a different NAT port), which will fail to reach the end host. That's OK... but at least one could kill those states.

Looking for a solution I found a related issue on pfsense... https://forum.netgate.com/topic/107208/pfctl-k-id-not-working/7

Basically what it comes down to is that the states panel doesn't seem to kill states like it should. It USED to work.

We regularly used this function to search out and kill states for a particular client, to effect changes like a new NAT target, but it hasn't worked for a long while. Only the large-scale approach (filter on an IP, click the KILL button) has worked, and that doesn't suit this case: we might want to drop a SIP registration without dropping a call, so we only want to kill a single mapping or the mappings in a related group.

The pfsense thread seems to identify the issue - although he was using pfctl directly.

In short:
pfctl -s state -vv

produces a list of states like:
all udp 10.x.x.x:ppppp (66.x.x.x:PPPPP) <- 216.x.x.x:RRRRR       NO_TRAFFIC:SINGLE
   age 00:00:04, expires in 00:00:56, 1:0 pkts, 32:0 bytes, rule 104
   id: 010000005cb3317b creatorid: 9171c710

It's these last two numbers that are key. I believe the docs on pfctl make it look like you can kill a state like this:
pfctl -k id -k 010000005cb3317b

But in reality it requires both the id and the creator:
pfctl -k id -k 010000005cb3317b/9171c710
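
If it's useful to anyone hitting the same thing, here's a minimal shell sketch of that workaround, assuming the id:/creatorid: output format shown above (the host address is a hypothetical example):

Code: [Select]
# Collect id/creatorid pairs for states involving one host, then kill each state.
# 203.0.113.25 is a placeholder address; substitute the client you care about.
pfctl -s state -vv |
  awk '/203\.0\.113\.25/ { hit = 1 }
       hit && /^[[:space:]]*id:/ { print $2 "/" $4; hit = 0 }' |
while read st; do
    pfctl -k id -k "$st"
done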

I think this is likely a bug in both pfsense and opnsense, but people who need it have just been working around it.

Does what I'm suggesting make sense?

3
20.1 Legacy Series / Is there a reason max-src limits are linked to virusprot by default and always?
« on: July 02, 2020, 10:19:37 pm »
I was trying to test out something I've tried before... hoping it had changed. And locked myself out again  ::) :P

I'm wondering if there's a reason things are the way they are, or if any of the powers that be can see a reason NOT to support a simple change I'm requesting. I can find ways to work around it. I just think it might make the feature more widely useful if there was some flexibility in the way the feature is coded.

I'm making this thorough so it's a helpful reference to any who read it later even if nothing changes.

When you want to set maximum limits for TCP connections, you have the following field options (from the pf man page: https://www.freebsd.org/cgi/man.cgi?query=pf.conf&sektion=5&n=1):

Quote
max-src-nodes <number>
Limits the maximum number of source addresses which can simultaneously have state table entries.

max-src-states <number>
Limits the maximum number of simultaneous state entries that a single source address can create with this rule.

For stateful TCP connections, limits on established connections (connections which have completed the TCP 3-way handshake) can also be enforced per source IP.

max-src-conn <number>
Limits the maximum number of simultaneous TCP connections which have completed the 3-way handshake that a single host can make.

max-src-conn-rate <number> / <seconds>
Limit the rate of new connections over a time interval.  The connection rate is an approximation calculated as a moving average.

There is a section in the docs:
https://docs.opnsense.org/manual/firewall.html#connection-limits

However, I think it's missing some information / explanation, including a section on or reference to Firewall -> Diagnostics -> pfTables.

If you enable any of those options, you need to know that triggering them results in the offending address being blacklisted.
To manage it (a CLI equivalent is sketched after this list):

  • Navigate to: Firewall -> Diagnostics -> pfTables
  • Select the virusprot table
  • Remove any IP you need to unblock
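
From a shell, a minimal sketch of the same cleanup with pfctl's table commands (the address is a hypothetical example):

Code: [Select]
pfctl -t virusprot -T show                  # list the addresses currently blocked
pfctl -t virusprot -T delete 203.0.113.25   # remove a single entry (example address)
pfctl -t virusprot -T flush                 # or clear the whole table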


From the posts I've seen, it seems like that keeps catching people and it's not hard to understand why...

At the beginning of the firewall rules there are a couple of important lines:

Code: [Select]
table <virusprot>
This sets up the table.

Code: [Select]
block in log quick from {<virusprot>} to {any} label "8e36..." # virusprot overload table
This results in anything listed in that table being blocked in spite of later rules.

When you add some state limits, the rule gets tagged with them like this:
Code: [Select]
max-src-conn 1 max-src-states 10 tcp.established 120 max-src-conn-rate 1 /1, overload <virusprot> flush global

The tricky part (and what I'm wondering about changing / making allowance for customization) is the part at the end:
overload <virusprot> flush global

If we review the manual for pf again:
Quote
Because the 3-way handshake ensures that the source address is not being spoofed, more aggressive action can be taken based on these limits. With the overload <table> state option, source IP addresses which hit either of the limits on established connections will be added to the named table. This table can be used in the ruleset to block further activity from the offending host, redirect it to a tarpit process, or restrict its bandwidth.

The optional flush keyword kills all states created by the matching rule which originate from the host which exceeds these limits. The global modifier to the flush command kills all states originating from the offending host, regardless of which rule created the state.

For example, the following rules will protect the webserver against hosts making more than 100 connections in 10 seconds.  Any host which connects faster than this rate will have its address added to the <bad_hosts> table and have all states originating from it flushed. Any new packets arriving from this host will be dropped unconditionally by the block rule.

      block quick from <bad_hosts>
      pass in on $ext_if proto tcp to $webserver port www keep state \
         (max-src-conn-rate 100/10, overload <bad_hosts> flush global)

What I'm suggesting is: under the advanced settings, could there be a list of tables so you could optionally select one? The default could still be virusprot, which would preserve the current behavior.

The flush and global options could be checked by default, but allowed to be unchecked.
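
To illustrate, a hedged pf.conf-style sketch only (the table name and macros are hypothetical, and this is not how OPNsense currently generates rules):

Code: [Select]
# Offenders land in a purpose-specific table instead of <virusprot>,
# and existing states are left alone because "flush global" is omitted.
block in quick from <provision_limit> to any
pass in on $wan_if proto tcp to $provision_srv port https keep state \
    (max-src-conn-rate 100/10, overload <provision_limit>)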

WHY AM I ASKING?

Consider if I added a rule to rate-limit access to a webserver, maybe one used for provisioning. A large site recovering from a power failure COULD trip the limit on simultaneous HTTPS requests, but the ability to change the table name, and to leave out flush and global, would allow the working phones to keep working and the existing connections to continue downloading their payload.

The way it is, the tripwire is all or nothing - and prevents many people from using the connection rate limiting function unless they are prepared to lose all connectivity with any host that exceeds the limit.

Limiting connections (e.g. to an SMTP service) might be desirable, resulting in a timeout on the sender, instead of blacklisting the IP like currently happens.

In short, one documentation suggestion and one suggested change (improvement?):
1) Update the docs with a reference to how to fix it when you enable those features.
2) Make the table name selectable, and make the flush and global options optional.

What do you think? Happy to make a donation to the effort!  ;D

Thanks in advance for your consideration and feedback.

m


4
18.7 Legacy Series / Latest 18.7.6 selective states kill seems broken
« on: November 05, 2018, 07:09:51 am »

In my configuration, I have two hosts using HA/CARP.

On the primary / CARP master, go to Firewall -> Diagnostics -> States Dump.

Filter on an IP. Press Kill.

Refilter on the same IP; the states do not seem to be cleared.

I then pressed X on each state, and filtered on the same IP again.

The states do not seem to be cleared.

I took the host in question offline. Repeated the process. I did this to ensure the host was not re-establishing the states before I could see them deleted.

So then I completely reset the states with Firewall -> Diagnostics -> States Reset.

Now the states are gone. I haven't had to do this often, but I'm pretty sure this worked properly in 18.1.x - is there something wrong with my procedure?
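
For what it's worth, the same check and kill can be done from the shell with pfctl (the client address is a hypothetical example):

Code: [Select]
pfctl -s state | grep 192.168.1.50     # list states involving the host
pfctl -k 192.168.1.50                  # kill states whose source is that host
pfctl -k 0.0.0.0/0 -k 192.168.1.50     # kill states from any source to that host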

Thanks!

M

5
18.7 Legacy Series / CARP issue (?) or my error - seems to exist since before 18.1 to present
« on: October 30, 2018, 11:21:06 pm »
We have a pair of OPNsense firewalls configured with high availability / CARP and recently noticed an odd behavior.

One of our virtual IPs was intermittently not responding to pings: about 10 seconds "on" / 20 seconds "off", pretty regular. The issue only applied to a single IP. We could not see any CARP traffic on the public network, but we found a way to "fix" the problem: by changing the VHID to a different number, the problem went away.

The virtual IP in question does have a password (long and complex random string).

The base / skew is 1 / 0.

I would have expected that any competing broadcasts for this VHID would not have been accepted by our router due to the mismatched password.

And yet something seems to be "stealing" our address. I don't see the CARP mode changing to backup on the primary, but perhaps I'm having trouble catching it?

I did run packet captures on the WAN - and although I couldn't see traffic to indicate that's what was happening, I think the symptoms would indicate that's the cause?
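
In case it helps someone reproduce this, a minimal sketch of how the advertisements can be watched from the shell, assuming the WAN device is igb0 (adjust the interface name):

Code: [Select]
# Protocol 112 is CARP/VRRP; -T carp decodes it as CARP so vhid, advbase
# and advskew show up in the verbose output.
tcpdump -nvi igb0 -T carp ip proto 112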

By changing the VHID of that one virtual IP, I can work around the issue. If I change the VHID back, the issue returns. I'd like to resolve the issue permanently - I'm on the latest release firmware.

Can anyone recommend any next steps?

Thanks in advance :-)

6
16.7 Legacy Series / MLPPP performance problem [NOT SOLVED] Bounty?$
« on: November 29, 2016, 03:15:08 am »
We are in the process of converting a lot of our routing over to opnsense from pfsense...

And I think we may have another candidate. But I'm trying to avoid making my life harder ;-)

We have an MLPPP install which is causing us grief. The safe choice was to set up MLPPP on the hardware which had pfsense, but now we have a rather bizarre issue - maybe changing to opnsense will correct it if the issue results from the config?

The MLPPP setup "works", but the recommendation from the ISP was to have unique per-path logins. There are config notes for both an RB750 and a Cisco 871k9, but not for pfSense or OPNsense (yet). We've used this type of setup before and have not previously noticed this issue, but I can't say for sure it hasn't happened.

The issue seems to be:
  • traffic to the router (iperf) works at full MLPPP bonded speed - no loss or issues (see the test sketch after this list)
  • traffic using FTP (from the router to a remote server) seems to load balance the upstream, but the downstream only seems to use a single connection
  • traffic from behind the pfSense NAT (also FTP) seems to download a little slower, but the upload is horrible - 10-50% of the upload seen when the FTP is done on the router, and variable
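
For reference, a rough sketch of how the directional comparison can be run with iperf3 (assuming it's installed on both ends; 203.0.113.10 is a hypothetical server address):

Code: [Select]
iperf3 -s                               # on the remote server
iperf3 -c 203.0.113.10 -P 4 -t 30       # from the client: upstream over the bundle
iperf3 -c 203.0.113.10 -P 4 -t 30 -R    # reverse mode: downstream over the bundle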

The Cisco-style config involves OSPF configuration, etc.; the RB750 config seems more akin to what pfSense / OPNsense expects.

Here is the thread - https://forum.pfsense.org/index.php?topic=121781.0

Does anyone think this issue would be addressed by a change?

Can we prove that by hacking the pf config?

Thank you!!

Mitch

7
16.7 Legacy Series / PPTP / MPD performance issue
« on: October 13, 2016, 12:43:10 am »
First... I know PPTP is not secure - believe me we aren't counting on this for security - only bonding public traffic. 8)

We are close IF this is possible, so bear with me - here are all the details.

Background / goal: We need to create a bonding solution to improve performance and bond multiple similar links (or potentially dissimilar links) into a single higher speed link. MLPPP might be ideal except that the connections are over the Internet / not bridged Ethernet so as far as I understand that’s not possible. We settled on using the MPD plugin on a head end box, and configuring it to accept the PPTP traffic, bundle it, and NAT the traffic to the internet.

We tried this before, and when it didn't work we thought the traffic splitting and minor differences in latency were to blame. The new system has two connections as close to the same performance as we can manage and practically maintain. If we knew what we had to change, we could though.

From my understanding, there are a few options:
  • PPTP / MPD (which we are trying)
  • MLPPP, which is PPPoE and requires bridged connectivity (which we can't always have)
  • OpenVPN (another idea, but it's user space and probably lower performance than MPD, from what I read)

If you have another suggestion based on open tool chains, we're willing to look. OPNsense is something we're comfortable with, so we're looking to leverage knowledge already at hand.

The good news is, it works: we can see the multiple connections, and the two WAN connections on the remote end seem to be used similarly (balancing). The bad news is that the performance isn't great; it doesn't make anything better, but worse. And I'm not sure why or where the limit is, whether in our config or in our understanding. Nothing we can see in terms of bandwidth, CPU, etc. seems to be saturated on either end.

Network / Client side: The two connections are virtually identical VDSL services with 50Mb/s downstream and 10Mb/s upstream.

Hardware / Client side:
AMD G-T40E Processor (2 cores) / 4GB RAM
OPNsense 16.7.5-i386
FreeBSD 10.3-RELEASE-p9
OpenSSL 1.0.2j 26 Sep 2016

Network / Server side: The server side is connected by Gb and is capable of bursting to the full speed.

Hardware for the MPD Server:
AMD GX-412TC SOC (4 cores) / 4GB RAM
OPNsense 16.7.5-amd64
FreeBSD 10.3-RELEASE-p9
OpenSSL 1.0.2j 26 Sep 2016

Setup:

On the client, we created a PPTP interface and used the two existing WANs as link interfaces. We put the server gateway IP in both interface gateways and left the "Local IP"s blank. Then we added a LAN rule to route a PC out over this connection. When it browses the internet, the public IP of the MPD server is seen as its IP.

On the server, we:
  • Installed the MPD/PPTP plugin, enabled the PPTP server (VPN -> PPTP -> Settings) and set:
    • No. of PPTP users (16)
    • Server address (server WAN IP)
    • Remote address range (private IP for the start of the tunnel range)
    • PPTP DNS servers
  • Added a couple of users; we did not assign those users to specific IP addresses.
  • Then we started the PPTP service.

On the firewall rules, we (a rough pf translation follows this list):
  • Allow all GRE traffic to the WAN interface
  • Allow all TCP:1723 traffic to the WAN interface
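
Roughly what those rules amount to in pf terms; just a sketch, with $wan_if standing in for the actual WAN interface macro:

Code: [Select]
pass in on $wan_if proto gre from any to ($wan_if)
pass in on $wan_if proto tcp from any to ($wan_if) port 1723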

The results:

Network / Performance: Speedtest.net on both links normally shows 50Mb/s or more down, and 10Mb/s or more up. Each link is about 12-15ms from the MPD server. The server is 1ms to 4ms from the speedtest.net test servers.

The CPU idle percentage, which is normally 85-90%, does not change during the tests, and top does not show any noticeable CPU use during the speed tests.

I ran iostat during the tests; the only noticeable change is a small increase in interrupt time (1 or 2%!).

The results are horrible though! We see a combined:
  • download speed of 12 to 20Mb/s
  • upload speed of 6 to 9Mb/s


So why is it slow? Shouldn’t it be faster?

Using traffic graphs we can see the traffic is split into two streams, and recombined. But it almost feels like we are tunneling over a TCP socket waiting for acks for every packet... is that what's happening?
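
One way to sanity-check that from the shell is to confirm the data path is GRE (IP protocol 47, as PPTP data should be) rather than a TCP stream; a minimal sketch, assuming the server's WAN device is igb0:

Code: [Select]
# GRE-encapsulated PPTP data vs. the TCP 1723 control channel.
tcpdump -ni igb0 'ip proto 47 or tcp port 1723'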

If so, how do we fix it?

I'm sure we must have missed something?

Thanks in advance!!

8
16.1 Legacy Series / fragments blocked - am I missing something?
« on: June 09, 2016, 12:30:55 am »
pfSense seems to have a similar / related issue: Bug #4723

What I am trying to do is capture UDP packets on the LAN - those packets should be reaching a remote service (albeit fragmented) as they are about 2000 bytes each. This is not my app or design - this is related to network monitoring and a function of network device hardware / firmware and beyond my control. So unfortunately "make the packets smaller" isn't an option ;-)

Ideally I want the packets passed out on the WAN, but I don't think they are arriving at their destination so I started working backwards.

To diagnose the issue, I tried doing a packet capture on the LAN interface. There is an allow rule allowing the traffic and routing it to a gateway group. The packet capture reports that a packet was received with a size of around 1650 bytes, but when I download the packet capture it looks like only the first fragment was captured.

What I see in my capture is a fragmented IP packet (length 1514).

I never see the second fragment so I can't reassemble the packets in the capture.
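
For anyone trying to reproduce this, a minimal sketch of a shell capture that keeps both fragments (the interface and host address are hypothetical); the flags/offset test catches every fragment, including the later ones that have no UDP header and so never match a plain udp filter:

Code: [Select]
# Assuming the LAN device is igb1 and the sending host is 192.168.1.50:
tcpdump -ni igb1 -s 0 -w /tmp/frags.pcap \
    'host 192.168.1.50 and (udp or (ip[6:2] & 0x3fff != 0))'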

I assume that IF the traffic isn't being blocked, the outbound traffic is also likely missing the second fragment. That means, of course, that the end service never sees the traffic in a way that it can reassemble / parse the data.

I've tried turning off scrubbing (System / Settings / Firewall/NAT / Disable Firewall Scrub). I've also tried checking "Clear invalid DF bits instead of dropping the packets" - that didn't help either. The firewall rules show TCP flag options - but no UDP flag options that might change the handling of UDP fragments...

Any ideas appreciated!

Thank you in advance!
