Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - iMx

General Discussion / Virgin Media UK - 'Super' Hub 4 - Modem Mode
« on: January 22, 2022, 09:00:03 am »
Recently upgraded my Virgin Media connection to Gig1 and with it (unfortunately) I was provided with a Hub 4 - from day 1 it has been a PITA.  It took me hours and hours to finally get opnsense an IP address from the hub in modem mode.

Running the device in Modem Mode, it (or Virgin's DHCP servers) is extremely picky about when/if it will give opnsense an IP - certainly with the default DHCP client settings, even though the opnsense defaults already request more frequently than the FreeBSD defaults.

There are various reports of problems from people using opnsense, pfSense, Asus, you name it.  It seems to be 'pot luck' whether the device behind the modem gets an IP address or not.  The suggested steps of:

- Power off the modem
- Restart your main firewall/router
- Give it a few minutes
- Power on the modem

... seem to work in some cases, but not others.  I firmly believe there is still an element of luck to whether this works or not.

I read somewhere that if the DHCP request doesn't complete in a 15 second window after the modem has booted, it basically ignores all further requests.  This was quoted from a Virgin engineer, so who knows...

I thought I had cracked it last time I had problems, when I filtered 192.168.100.254 in 'Reject Leases From' - otherwise opnsense ends up getting a 192.168.100.x address instead.  Note: on previous modems you needed to filter 192.168.100.1 (I believe).
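For reference, 'Reject Leases From' appears to map to the dhclient.conf reject statement - the below is only a sketch, since OPNsense generates the real config itself:

```
# ignore OFFERs from the hub's internal DHCP server
reject 192.168.100.254;
```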

Running tcpdump, filtering for DHCP requests/replies, the requests are being made, but there is either no response or an incomplete exchange:

Code:
tcpdump -i eth0 port 67 or port 68 -e -n -vv

Last night the UPS on my cable modem died and opnsense failed over beautifully to 4G - I didn't realise anything had failed until, on my way to bed, I heard a whining noise coming from the garage (PushOver notifications are muted after 10PM).  So I removed the UPS, powered on the cable modem and went to bed - this morning it still hadn't got an IP and was still running on 4G.

I tried restarting the modem, and opnsense, using the basic procedure above that is reported to (sometimes) work.  But it didn't.

In the end, I discovered that Asus now has some '12Hz DHCP' option to supposedly fix similar issues where the device doesn't get an IP - it requests an IP 12 times a second (12Hz)?!?!

https://www.asus.com/my/support/FAQ/1043591/

Frustrated, I ended up putting '1' in every box in the DHCP options for the WAN interface - along with filtering 192.168.100.254 from DHCP replies - to try to make DHCP requests as frequent as possible.  Rebooted the Hub 4....and it got an IP address first time.
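For what it's worth, those WAN DHCP advanced boxes appear to map to the ISC dhclient.conf timing options, so '1' in every box is roughly equivalent to the below (a sketch - option names are from dhclient.conf(5), the values are simply what I used, not recommendations):

```
timeout 1;           # give up on an attempt after 1 second
retry 1;             # wait only 1 second before trying again
select-timeout 1;    # take the first OFFER after 1 second
reboot 1;            # only try to reclaim the old lease for 1 second
backoff-cutoff 1;    # cap the randomised exponential backoff at 1 second
initial-interval 1;  # start retransmitting after 1 second
```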

... I don't really want to test this again, just yet, but will update this post with any further developments next time I come across it.

Aside from this, does anyone have any idea how to replicate the Asus 12Hz option?  I do see some mention that others with Starlink and other cable modems also need to use this feature, so it doesn't seem to be Virgin-specific.
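As far as I know there's no 12Hz knob in OPNsense; as a thought experiment, a crude shell loop along these lines would approximate it.  DHCLIENT defaults to a dry-run echo here - swap in the real dhclient(8) and your actual WAN interface name to fire requests for real:

```shell
#!/bin/sh
# Crude sketch, NOT an OPNsense feature: approximate the Asus "12Hz DHCP"
# option by re-running the DHCP client in a short, fast burst.
IFACE="${IFACE:-igb0}"                   # hypothetical WAN interface
BURST="${BURST:-12}"                     # one second's worth at 12/second
DHCLIENT="${DHCLIENT:-echo dhclient}"    # dry-run by default; set to "dhclient"

i=0
while [ "$i" -lt "$BURST" ]; do
    $DHCLIENT "$IFACE"                   # each run would send a fresh DISCOVER
    sleep 0.083                          # ~1/12th of a second between attempts
    i=$((i + 1))
done
```

Whether the Hub 4's DHCP server would tolerate this is anyone's guess.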

22.1 Legacy Series / i7 9700 (presumed) CPU issues resolved with 22.1
« on: January 21, 2022, 08:11:47 am »
Just adding this here, for anyone in a similar situation...

Was previously running a Qotom J1900 for the last 7+ years, happily on 21.7.  Recently upgraded the firewall, now with an i7 9700 CPU.

Upgrade was easy enough, export the config, edit the interface names for the new device in the export file, import into the new box.  Initially started out with 21.7 on the new box.

But whilst using PowerD with the J1900 was not a problem, on the i7 9700 (still on 21.7) it caused crashes, usually under a reasonably small amount of load - the fans would go to 100% and the box required physically power cycling.

Turning off PowerD resolved the issue, but the CPU frequency (dev.cpu.0.freq) would always be at 1400 - even using 'stress' the CPU would seemingly not scale.

Upgrading to 22.1, which I believe uses hwpstate_intel for power management instead, resolved the issue.

dev.cpu.x.freq when running stress shows it also using up to the 'Turbo Boost' frequency (4497), so the CPU can work at maximum when required and drop back down when idle (dev.cpu.0.freq: 897).

Performance vs power can be tuned with dev.hwpstate_intel.*.epp
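For anyone wanting to poke at this, these are the sysctls involved (a sketch - the OID names assume the hwpstate_intel driver on 22.1/FreeBSD 13, where 0 means maximum performance and 100 maximum energy saving):

```
sysctl dev.cpu.0.freq               # current frequency of core 0
sysctl dev.cpu.0.freq_levels        # available frequency levels
sysctl dev.hwpstate_intel.0.epp     # energy/performance preference, 0-100
sysctl dev.hwpstate_intel.0.epp=50  # bias core 0 towards a balance
```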

I did come across a FreeBSD post essentially saying PowerD is legacy and should probably be removed - so I haven't attempted PowerD again, but will test it again at some point.  It seems 'ok' for older CPUs, but for more modern CPUs hwpstate seems to be the way to go.

21.1 Legacy Series / Are your Shaper queue lengths set correctly?
« on: July 21, 2021, 05:01:08 pm »
Anyone reading this, if you're running the Shaper, on 21.1.8, could you test something for me?

- Edit the Pipe
- Enable Advanced
- Set Queue slots to a value other than 50 (the default)
- Apply

On the CLI, if you run:

Code:
ipfw pipe show

Do you still only see the default queue slot size of 50?

Code:
q75536  50 sl. 0 flows (1 buckets) sched 10000 weight 0 lmax 0 pri 0 droptail

I had a look in the /tmp directory to see if I could find the ruleset, but it didn't seem to be there.

I'm pretty sure in 21.1.7 the queues were set correctly based on the value entered in the UI.

19.1 Legacy Series / synproxy with NAT inbound, no advanced option?
« on: July 06, 2019, 05:28:23 pm »
Hi there,

I've got a few ingress NAT rules (port forwards), however I can't see how I can specify 'synproxy' as part of them.

The automatically created rules are not editable, so I can't get to the Advanced settings to potentially enable synproxy there - and it doesn't seem to be possible to set it on the parent NAT rule?
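For reference, a raw pf rule using synproxy for a forwarded service would look something like the below - a hand-written sketch, not something generated by the OPNsense UI; the interface and address are made up:

```
pass in on em0 proto tcp from any to 192.0.2.10 port 443 \
    flags S/SA synproxy state
```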

Cheers,

19.1 Legacy Series / Suricata/IDS change, not automatically synced to secondary HA node
« on: March 09, 2019, 10:46:01 am »
Hi there,

Running 19.1.3, making a change to Suricata/IDS - for example changing the pattern matcher from Default to Hyperscan - does not automatically trigger an XMLRPC sync of the config to the secondary node.

I cannot see any attempt for it to do so, I see the Suricata config reload/regeneration in the logs, but no automatic sync.

If I go to Firewall -> HA -> Status -> Synchronize config to backup, the change is replicated.  But it does not seem to trigger automatically.  I would assume it should do?  Is anyone else seeing this?

Other changes, firewall rules for example, DO automatically trigger a sync.

General Discussion / Asymmetric routing after fail over - MultiWAN, but single IP on WAN1
« on: March 07, 2019, 01:38:39 pm »
I have the following HA setup with multi-WAN; I'm hoping one of you clever people might be able to suggest a workaround.  Albeit probably a hacky workaround, as I realise this is a hacky setup!

Node A
WAN1: 10.0.0.1/30 (RFC1918)
WAN2: 37.x.x.1/27 (Default GW)

Node B
WAN1: 10.0.0.2/30 (RFC1918)
WAN2: 37.x.x.2/27 (Default GW)

CARP:
WAN1 VIP: 78.x.x.1/30 (Single IP from ISP)
WAN2 VIP: 37.x.x.3/27

IP Aliases:
80.x.x.1/27 - WAN1 (routed by ISP to 78.x.x.1). Gateway set to WAN1 gateway (78.x.x.2/30)
80.x.x.2/27 - WAN1 (routed by ISP to 78.x.x.1). Gateway set to WAN1 gateway (78.x.x.2/30)
80.x.x.3/27 - WAN1 (routed by ISP to 78.x.x.1). Gateway set to WAN1 gateway (78.x.x.2/30)
...
80.x.x.30/27 - WAN1 (routed by ISP to 78.x.x.1). Gateway set to WAN1 gateway (78.x.x.2/30)

Gateways, monitoring disabled for both:
WAN1: 78.x.x.2/30
WAN2: 37.x.x.30/27 (Default GW)

- Single IP from the ISP for WAN1, which is configured as a CARP VIP with RFC1918 on Node A and Node B
- A further /27 subnet is routed to the single CARP IP for WAN1
- IP aliases are set up for the same VHID as WAN1 CARP
- WAN2 has the default gateway, all IP addresses are externally reachable/routable (non-rfc1918)

When I perform a fail over from Node A -> Node B, with state synced, the IP aliases fail over correctly.  However, the egress packets for 80.x.x.0/27 IP aliases are routed out of the default WAN2 gateway once failed over to Node B.  If I clear the state on the firewall, things then sort themselves out.

Presumably, as there is no routing table entry for the WAN1 CARP gateway, traffic takes the default route after failover - which is egress via WAN2.  When I clear the states, pf then routes correctly.  If I disable state sync, the fail over happens, state is lost, but routing is correct - i.e. in and out of WAN1 for 80.x.x.0/27.

Is the workaround to just not sync state?  Or is there a way for state to be synced, but for egress to route correctly via WAN1 for 80.x.x.0/27 immediately after fail over?
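In case it helps anyone, the state clearing I describe above can also be done from the shell - note this flushes ALL states, so established connections will drop:

```
pfctl -F states   # flush the entire pf state table
```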


Tutorials and FAQs / Check_MK Agent setup
« on: February 27, 2019, 04:27:39 pm »
Quick overview of installing the check_mk agent - a brain dump whilst I still have it in my shell history - I saw this was mentioned once before, some time ago:

https://forum.opnsense.org/index.php?topic=1310.0

1. Create a new directory:

Code:
mkdir -p /opt/bin

2. Download the agent:

Code:
curl "http://git.mathias-kettner.de/git/?p=check_mk.git;a=blob_plain;f=agents/check_mk_agent.freebsd;hb=HEAD" -o /opt/bin/check_mk_agent

3. Make it executable:

Code:
chmod +x /opt/bin/check_mk_agent

4. Install bash and statgrab:

Code:
pkg install libstatgrab bash

5. Add the following to /etc/inetd.conf:

Code:
check_mk  stream  tcp nowait  root  /opt/bin/check_mk_agent check_mk_agent

6. Add the following to /etc/services:

Code:
check_mk        6556/tcp   #check_mk agent

7. Add the following to /etc/hosts.allow, modifying monitoring.server.ip.address as required:

Code:
# Allow nagios server to access us
check_mk_agent : monitoring.server.ip.address : allow
check_mk_agent : ALL : deny

8. Start inetd:

Code:
/etc/rc.d/inetd onestart

9. Add firewall rules as required to allow access to TCP 6556.

To Do: Make it start on boot, investigate a potential plugin to make it survive (major?) upgrades
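For the start-on-boot to-do, the stock FreeBSD approach would be the rc.conf line below - an assumption on my part that OPNsense still honours it, and it likely won't survive major upgrades (hence the to-do):

```
# /etc/rc.conf
inetd_enable="YES"   # start inetd at boot via the standard FreeBSD rc system
```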

17.7 Legacy Series / ICMP to L2TP 'WAN' IP fails
« on: January 22, 2018, 04:10:28 pm »
Hi there,

I have a cable connection with a dynamic IP; for that reason I also have an L2TP tunnel from another provider that provides static IP addresses over the tunnel.  At some point after 17.7.7 (I think, or thereabouts), ICMP to the WAN IPs over the L2TP tunnel stopped getting a response, immediately after an upgrade.  Currently running 17.7.12.

I have the following rules permitting ICMP:

Code:
pass in  quick on l2tp1 reply-to ( l2tp1 c.c.c.c )  inet proto icmp from {any} to {(l2tp1)} icmp-type {echoreq} keep state label "USER_RULE"
pass in  quick on l2tp1 reply-to ( l2tp1 c.c.c.c )  inet proto icmp from {any} to $Host_Guest_WAN_IP icmp-type {echoreq} keep state label "USER_RULE"

The below is a dump from the l2tp1 interface, showing the response being generated - a.a.a.a is an external server, b.b.b.b the WAN IP on the L2TP tunnel/interface:

Code:
12:05:24.304934 IP a.a.a.a > b.b.b.b: ICMP echo request, id 5384, seq 58, length 40
12:05:24.304959 IP b.b.b.b > a.a.a.a: ICMP echo reply, id 5384, seq 58, length 40
12:05:24.304970 IP a.a.a.a > b.b.b.b: ICMP echo request, id 5384, seq 59, length 40
12:05:24.304994 IP b.b.b.b > a.a.a.a: ICMP echo reply, id 5384, seq 59, length 40
12:05:24.305005 IP a.a.a.a > b.b.b.b: ICMP echo request, id 5384, seq 60, length 40
12:05:24.305030 IP b.b.b.b > a.a.a.a: ICMP echo reply, id 5384, seq 60, length 40

The below is a capture from the L2TP provider's end (they provide the option to create 10-second pcaps from their portal); the response does not make it down the tunnel.

Code:
97 9.367539 a.a.a.a b.b.b.b ICMP 106 Echo (ping) request  id=0x10ea, seq=5/1280, ttl=57 (no response found!)

So far, I have tried:

- Disabling 'reply-to' on the rule.  When I do this, the response does not go down the L2TP tunnel at all, i.e. the reply is not seen in the l2tp1 capture
- Setting a gateway to the L2TP tunnel on the rule, rather than the default.  Did not resolve it.

The only thing I haven't tried so far, as I need to get a maintenance window from the other half, is disabling shared forwarding - is this likely to help?

But as I say, this stopped working after a 17.7.x upgrade - I think when I went from 17.7.7 to 17.7.10.

Cheers,

Ed

17.7 Legacy Series / Traffic shaper, should I see my rules in 'ipfw -a list'?
« on: October 30, 2017, 02:20:15 pm »
So, I followed a few of the FQ_Codel guides on here, and I believe I had it working on an earlier 17.7 release - on the current 17.7.7_1 I can't seem to get it working.

Something I'd just like to clarify: presumably I should see the Rules/Queues that I configure in the Traffic Shaper section in 'ipfw -a list'?  I don't, and if I should, I can't for the life of me work out why.  ipfw rules below:

Code:
root@fw00:~ # ipfw -a list
00100       0          0 allow pfsync from any to any
00110       0          0 allow carp from any to any
00120       0          0 allow ip from any to any layer2 mac-type 0x0806,0x8035
00130       0          0 allow ip from any to any layer2 mac-type 0x888e,0x88c7
00140       0          0 allow ip from any to any layer2 mac-type 0x8863,0x8864
00150       0          0 deny ip from any to any layer2 not mac-type 0x0800,0x86dd
00200       0          0 skipto 60000 ip6 from ::1 to any
00201      44       9156 skipto 60000 ip4 from 127.0.0.0/8 to any
00202       0          0 skipto 60000 ip6 from any to ::1
00203       0          0 skipto 60000 ip4 from any to 127.0.0.0/8
01002      36       3560 skipto 60000 udp from any to 10.8.6.254 dst-port 53 keep-state
01002     117      13994 skipto 60000 ip from any to { 255.255.255.255 or 10.8.6.254 } in
01002     160      21192 skipto 60000 ip from { 255.255.255.255 or 10.8.6.254 } to any out
01002       0          0 skipto 60000 icmp from { 255.255.255.255 or 10.8.6.254 } to any out icmptypes 0
01002       0          0 skipto 60000 icmp from any to { 255.255.255.255 or 10.8.6.254 } in icmptypes 8
01003       0          0 skipto 60000 udp from any to 192.168.3.254 dst-port 53 keep-state
01003       0          0 skipto 60000 ip from any to { 255.255.255.255 or 192.168.3.254 } in
01003       0          0 skipto 60000 ip from { 255.255.255.255 or 192.168.3.254 } to any out
01003       0          0 skipto 60000 icmp from { 255.255.255.255 or 192.168.3.254 } to any out icmptypes 0
01003       0          0 skipto 60000 icmp from any to { 255.255.255.255 or 192.168.3.254 } in icmptypes 8
65535 9056022 8639833830 allow ip from any to any

I've followed the RickNY guide, below, multiple times, line for line, but I don't actually see any reduction in bufferbloat, nor in downstream bandwidth (even if I set it to something stupidly low), suggesting nothing is matching.

https://forum.opnsense.org/index.php?topic=3758.0

Screenshot in the below post shows 'queue' rules in ipfw:

https://forum.opnsense.org/index.php?topic=4665.msg18072#msg18072

I don't seem to have these in my 'ipfw -a list' above, no matter what 'Rules' I configure in Firewall -> Traffic Shaper -> Settings -> Rules:

   
Code:
11 WAN ip 10.8.6.0/24 any DownQueue
21 WAN ip any 10.8.6.0/24 UpQueue
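Incidentally, dummynet objects are listed separately from the ruleset, so alongside 'ipfw -a list' these standard ipfw subcommands are worth checking to see whether the schedulers/queues exist at all:

```
ipfw sched show   # schedulers (e.g. FQ-CoDel) and their parameters
ipfw queue show   # queues attached to those schedulers
ipfw pipe show    # pipes (bandwidth limits)
```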



OPNsense is an OSS project © Deciso B.V. 2015 - 2023 All rights reserved