OPNsense Forum » Profile of johndchch » Show Posts » Messages
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - johndchch

Pages: 1 2 3 [4] 5
46
Hardware and Performance / Re: Can OPNsense handle 100Gbit in a VMware environment? Tips to test Bandwith?
« on: January 11, 2022, 06:02:47 pm »
Quote from: Layer8 on January 11, 2022, 12:58:03 pm
@opnfwb: Yes, we noticed the vmxnet3 problem this week. We only have around 600-700Mbit/s routed throughput with opnsense installed in a VMware VM with vmxnet3.

Is there a workaround for this issue?

that’s an issue with your esxi setup, not opnsense - I’m saturating a 1gbit link just fine running under esxi7

what is the cpu in your host? there’s plenty of bits in 21.7 that are single threaded - hence you need a cpu with decent single core speed, throwing more cores at the vm won’t help

also you need to lock the opnsense image in ram

obviously what NIC you’re using in your host also matters… generally if you want decent performance you want Intel NICs in the host

47
22.1 Legacy Series / Re: TCP BBR congestion control in OPNsense with FreeBSD 13
« on: January 01, 2022, 07:17:49 am »
Quote from: bolmsted on December 31, 2021, 08:19:27 pm
The incoming traffic from the internal network tends to flood the GPON ONT upstream on the fibre network, and then they don't get the full upload speed. They are using BBR in order to limit the traffic inbound from their network, to allow them to get the full upload speeds.

whilst I have seen this on my own 900/500 fibre connection, it's easily fixed using the shaper ( without shaper enabled on upload I max out at 350, with it on I get the full 500 )
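For anyone wanting to replicate this, the idea is to shape the upload to a bit under the provisioned rate so queuing happens in OPNsense rather than in the ONT's buffer. The 5% headroom figure below is an illustrative starting point, not a value from the post:

```shell
# Assuming a 500 Mbit/s provisioned upload (adjust for your connection):
line_rate=500                      # Mbit/s
shaped=$((line_rate * 95 / 100))   # ~5% headroom - a common starting point
echo "$shaped"
```

In the OPNsense GUI this corresponds to a shaper pipe of roughly that bandwidth applied to outbound WAN traffic; the exact headroom is something you tune until the upload tests at full rate.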




48
21.7 Legacy Series / Re: Speeds getting slower when I open the traffic dashboard
« on: December 17, 2021, 01:46:04 am »
Quote from: bunchofreeds on December 17, 2021, 01:42:58 am
I would agree in as much as it is more likely Proxmox - Virtio - FreeBSD related rather than specifically OPNsense.

got a spare box you can spin esxi up on? it's one of those site-specific things where the only way to properly test is with the opposing hypervisor in the same environment

49
21.7 Legacy Series / Re: Speeds getting slower when I open the traffic dashboard
« on: December 17, 2021, 01:14:18 am »
Quote from: bunchofreeds on December 16, 2021, 10:11:51 pm
Other users with virtual OPNsense on proxmox confirmed it worked OK for them. I don't think I confirmed if they were passing through their adapters though.

as I said above NOT an issue on esxi - either using vmxnet3 or pci pass thru - sounds like a proxmox issue more than an opnsense issue

50
21.7 Legacy Series / Re: Speeds getting slower when I open the traffic dashboard
« on: December 16, 2021, 07:43:17 pm »
Quote from: Poli on December 16, 2021, 06:46:54 pm
iftop consumes a lot of CPU, but not enough to make the machine slower, so I can't see where the bottleneck is at the moment.

I think the fact that iftop is consuming a lot of cpu is another symptom of your issue - running virtualised under esxi here and monitoring both wan and lan interfaces with two instances of iftop, I'm seeing 3% cpu load per instance whilst testing the wan speed ( on a 1gig fibre connection - zero drop in throughput observed with both instances of iftop running )

what are the underlying physical NICs you're using? Sounds to me like there's issues with either proxmox or the NICs

51
21.7 Legacy Series / Re: I cant work out how everyone else managed to get mtu of 1500 working on pppoe
« on: November 28, 2021, 09:30:43 am »
Quote from: allebone on November 26, 2021, 05:06:45 am
I have a normal RJ45 connection going from the WAN port of the firewall to the switch, which the switch vlan tags, and then an SFP module in a different port on the same switch which is also on that same vlan, as the connection is fiber and I can't plug that directly into the firewall as there are no ports to accommodate it.

that should work fine - I’m running vlan trunking between two 10gbe switches to get the ISP feed from the GPON ONT in my garage thru to where the firewall actually sits.

Have you checked you’ve got jumbo frames enabled on the switch, if the switch has an option for it? ( though a lot of switches nowadays just accept anything up to 9K frames and don’t have an option to enable/disable jumbo support )

52
21.7 Legacy Series / Re: I cant work out how everyone else managed to get mtu of 1500 working on pppoe
« on: November 25, 2021, 09:21:48 pm »
switch shouldn't matter as it's on the LAN side - I'm presuming you have a direct connection from your opnsense WAN nic to whatever terminates your ISP connection ( ONT / docsis modem etc )?

have you previously got an MTU of 1508 working with this ISP on any other equipment? It could be that they're rejecting the larger MTU (PPP-max-payload) during pppoe mtu negotiation
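One way to check what actually got negotiated end to end is a Don't-Fragment ping sized to fill a 1500-byte packet (the target address here is just an example):

```shell
# Max ICMP payload for a 1500-byte IP packet: 1500 - 20 (IP) - 8 (ICMP) = 1472
payload=$((1500 - 20 - 8))
echo "$payload"
# With DF set, this only succeeds if the whole path really carries MTU 1500:
#   ping -D -s "$payload" -c 3 8.8.8.8    # FreeBSD ping; -D sets Don't Fragment
```

If the DF ping fails at 1472 but works at 1464, the tunnel is still running the default 1492 MTU.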

53
Hardware and Performance / Re: J3445m vs Ryzen 2600
« on: November 25, 2021, 07:07:27 pm »
you're going from a system with a single thread rating of 2250 ( and 6c/12t ) to one with a single thread rating of 800 (4c) - so quite a drop in capacity.

However if you're not running suricata/zenarmor it may well be fine - especially if you add a couple of decent NICs in it ( the onboard lan on the j3445m is realtek - not ideal ). If you're wanting ids and zenarmor - I really doubt it'll keep up though.

In the end the only answer will be to just try it and see how it performs on your current connection

plan B would be to run esxi on the ryzen 2600 and virtualise opnsense ( presuming you're running bare metal at the mo ) - at least it then gives you the chance to use some of the cpu capacity for other things

54
21.7 Legacy Series / Re: I cant work out how everyone else managed to get mtu of 1500 working on pppoe
« on: November 25, 2021, 05:27:44 pm »
Quote from: joeyboon on November 25, 2021, 08:55:02 am
Don't know if it helps you but my PPPoE connection only started working when I applied mss clamping.

So setting my MTU on the physical interface to 1508, so the PPPoE tunnel gets an MTU of 1500 (according to RFC 4638) and applying MSS clamping 1448 made everything work great. Dropped CPU load as well.

MSS of 1448 implies an MTU of 1488 - so you’ve basically overridden your MTU and actually gone even smaller than the default PPPoE MTU of 1492
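The arithmetic: TCP MSS is the MTU minus the 20-byte IP header and the 20-byte TCP header, so clamping to 1448 caps the effective MTU at 1488:

```shell
mss=1448
mtu=$((mss + 20 + 20))   # add IP + TCP header overhead back on
echo "$mtu"              # smaller than the default PPPoE MTU of 1492
```

To actually use a 1500 tunnel MTU, the clamp would need to be 1460 (1500 - 40).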

55
21.7 Legacy Series / Re: I cant work out how everyone else managed to get mtu of 1500 working on pppoe
« on: November 25, 2021, 08:29:23 am »
Quote from: allebone on November 24, 2021, 08:14:33 pm
I see... I did set the parent interface to 1508, but I had not considered/didn't know the hardware would make a difference. I am using a Protectli box, so the NICs are Intel gigabit NICs. If those don't support this, then it would make sense that I can't get it to work.

Intel NICs definitely can do mini jumbo frames - before I virtualised I ran bare metal and used mini jumbos on an i210, worked fine

56
21.7 Legacy Series / Re: I cant work out how everyone else managed to get mtu of 1500 working on pppoe
« on: November 24, 2021, 06:45:39 pm »
Quote from: allebone on November 23, 2021, 09:47:33 pm
pppoe0: flags=88d1<UP,POINTOPOINT,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 0 mtu 1492

I have checked and rebooted multiple times, and for sure edited the MTU on the dummy interface and the pppoe connection in addition. There must be something different but I can't work it out.

what's the MTU shown on the parent ethernet device the pppoe device is attached to - and what hardware is that interface running on? It is possible the actual ethernet device won't allow mtu >1500
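For context, PPPoE adds 8 bytes of header (6-byte PPPoE header + 2-byte PPP protocol ID), so the parent ethernet device has to carry mini jumbo frames for the tunnel to reach 1500. The `em0` name below is just an example:

```shell
# Parent MTU needed for a 1500-byte PPPoE tunnel:
parent_mtu=$((1500 + 8))
echo "$parent_mtu"
# Check (and test-raise) it on the parent interface - hypothetical device name:
#   ifconfig em0 | grep mtu
#   ifconfig em0 mtu 1508
```

If the `ifconfig em0 mtu 1508` step errors out, the NIC or driver won't do frames over 1500 and RFC 4638 style PPPoE is off the table on that hardware.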

57
21.7 Legacy Series / Re: I cant work out how everyone else managed to get mtu of 1500 working on pppoe
« on: November 23, 2021, 06:42:18 pm »
Quote from: allebone on November 23, 2021, 02:10:18 pm
I had a pppoe connection on top of em0 interface working on MTU of 1492 and started by setting the MTU to 1508 there which then states "calculated MTU 1500"

that really is all that is needed - here's what ifconfig shows for me - it works


vmx1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1508
        options=800028<VLAN_MTU,JUMBO_MTU>
        ether 00:0c:29:44:42:53
        inet6 fe80::20c:29ff:fe44:4253%vmx1 prefixlen 64 scopeid 0x2
        media: Ethernet autoselect
        status: active
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
...
pppoe0: flags=88d1<UP,POINTOPOINT,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 0 mtu 1500
        inet xxx.xxx.xxx.xx --> xxx.xxx.xx.x netmask 0xffffffff
        inet6 xxxx::xxx:xxxx:xxxx:xxxx%pppoe0 prefixlen 64 scopeid 0x7
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>


and from Windows I can confirm the mtu really is 1500

ping -f -l 1472 www.google.com

Pinging www.google.com [142.250.71.68] with 1472 bytes of data:
Reply from 142.250.71.68: bytes=68 (sent 1472) time=42ms TTL=117
Reply from 142.250.71.68: bytes=68 (sent 1472) time=41ms TTL=117



58
21.7 Legacy Series / Re: Weird CPU useage
« on: November 10, 2021, 08:32:29 pm »
Quote from: AdSchellevis on November 10, 2021, 06:04:05 pm
If you think you have more experience about why this isn't an i2c issue, please feel free to fix the driver so we can all conclude that the timeout happens for no reason (which I obviously don't expect, with all the time I have spent debugging this in the last few days).

you're saying it's an i2c issue talking to the sfp+ modules - but the x540 has an integrated phy - so if the problem exhibits on both platforms you're not looking at an i2c/sfp+ bug

on rhel8, if you do an ethtool --module-info on an x520 with a populated sfp+, the result is basically instantaneous. If you do it with no sfp+ installed you get a pause followed by an eeprom i/o error - and if you do the same to an x540 ( with its integrated phy ) you get the same pause and same error

on hardenedBSD you get a pause with ifconfig -v on an x540-t2 - and no difference in output with/without the -v option - it's like the hBSD ix driver is always querying i2c (and hitting a timeout) even when it's not appropriate to that model of card




59
21.7 Legacy Series / Re: Weird CPU useage
« on: November 10, 2021, 05:19:41 pm »
Quote from: AdSchellevis on November 09, 2021, 09:05:20 am
Installed an x520 card over here with an ixgbe driver, the issue is reproducible and definitely related to the driver.

On my end ifconfig -v operates normally as soon as I insert modules into the empty slots, which seems to point to some missing detection when trying to read the i2c bus (which is only possible when there is a module inserted).

Can you try to install sfp+ modules (or cables) and check if the issue is gone when all slots are occupied?

the issue with 'slow' output from ifconfig -m -v is still present with the x540-t2 ( with both ports connected and good link ) - so it's NOT an sfp+/i2c issue - it's a driver issue

60
21.7 Legacy Series / Re: Weird CPU useage
« on: November 10, 2021, 05:17:39 pm »
Quote from: AdSchellevis on November 10, 2021, 10:43:09 am
For future setups if possible I would prefer an Intel x700 series card (ixl) as these have been proven to be stable in our experience.

given I can buy an x520-da2 or x540-t2 for about US$70, whereas an x710-t2 is about US$600, I don't think this is a viable 'fix'

anyone running the 22.x beta able to confirm if the issue is present on freebsd13? ( update - just saw your comment on github that it is indeed better on 22.x/freebsd13 - sounds like that is the proper 'fix' )

OPNsense is an OSS project © Deciso B.V. 2015 - 2024 All rights reserved