Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - sparticle

#76
A bit more information.

If we disable Suricata we get up to approx. 730 Mbit/sec.

We have tried the VMXNET drivers also.

CPU and Memory for the VM are low even with Suricata switched on.

Really need some help with this please.

Cheers
Spart
#77
We have today migrated our OPNsense router to a VMware ESXi 6.7 VM.

The install went well, despite the config import losing all PPPoE settings.

We had to reinstall Suricata and a few other things.

It was up and running pretty quick.

However, the network performance is dreadful.

When creating the VM, the closest guest OS option we could find was 'Other: FreeBSD 12 or later (64-bit)'.

The vNIC options were e1000e or VMXNET3.

I had read somewhere that e1000e was the right choice, so that is what we chose.

iperf3 run shows this:


Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  38.1 MBytes   320 Mbits/sec   57    624 KBytes       
[  5]   1.00-2.00   sec  35.0 MBytes   294 Mbits/sec    0    697 KBytes       
[  5]   2.00-3.00   sec  35.0 MBytes   294 Mbits/sec    0    751 KBytes       
[  5]   3.00-4.00   sec  33.8 MBytes   283 Mbits/sec    2    571 KBytes       
[  5]   4.00-5.00   sec  35.0 MBytes   294 Mbits/sec    0    611 KBytes       
[  5]   5.00-6.00   sec  32.5 MBytes   273 Mbits/sec    0    652 KBytes       
[  5]   6.00-7.00   sec  33.8 MBytes   283 Mbits/sec    0    690 KBytes       
[  5]   7.00-8.00   sec  33.8 MBytes   283 Mbits/sec    0    727 KBytes       
[  5]   8.00-9.00   sec  36.2 MBytes   304 Mbits/sec    1    540 KBytes       
[  5]   9.00-10.00  sec  33.8 MBytes   283 Mbits/sec    0    618 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   347 MBytes   291 Mbits/sec   60             sender
[  5]   0.00-10.02  sec   345 MBytes   288 Mbits/sec                  receiver
CPU Utilization: local/sender 2.0% (0.2%u/1.7%s), remote/receiver 40.5% (11.3%u/29.2%s)
snd_tcp_congestion cubic
rcv_tcp_congestion newreno
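
A common first thing to try for FreeBSD guests on ESXi (a hedged suggestion, not a confirmed fix for this setup) is disabling the hardware offloads on the virtual NIC and re-running the test, since emulated-offload paths are a frequent cause of poor em/vmx throughput. The interface name em0 and the iperf3 target address are assumptions; substitute your own:

```shell
# Inspect the current option flags on the interface
# (em0 is an assumed name; list yours with `ifconfig -l`)
ifconfig em0

# Disable TSO, LRO and checksum offload for testing.
# These are standard FreeBSD ifconfig toggles, applied at
# runtime only (lost on reboot).
ifconfig em0 -tso -lro -txcsum -rxcsum

# Re-run the same test to compare
iperf3 -c 192.168.0.18 -t 10
```

OPNsense exposes the same toggles in the GUI under Interfaces > Settings, where they persist across reboots.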


Any other VM on the ESXi host runs at pretty much the full gigabit of the vSwitch uplinks.

An example from the LAN server:


Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   111 MBytes   933 Mbits/sec                 
[  5]   1.00-2.00   sec   112 MBytes   940 Mbits/sec                 
[  5]   2.00-3.00   sec   112 MBytes   940 Mbits/sec                 
[  5]   3.00-4.00   sec   112 MBytes   941 Mbits/sec                 
[  5]   4.00-5.00   sec   112 MBytes   941 Mbits/sec                 
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec                 
[  5]   6.00-7.00   sec   112 MBytes   941 Mbits/sec                 
[  5]   7.00-8.00   sec   112 MBytes   941 Mbits/sec                 
[  5]   8.00-9.00   sec   112 MBytes   941 Mbits/sec                 
[  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec                 
[  5]  10.00-10.00  sec   334 KBytes   900 Mbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate
[  5] (sender statistics not available)
[  5]   0.00-10.00  sec  1.09 GBytes   940 Mbits/sec                  receiver
rcv_tcp_congestion cubic
iperf 3.9


Can anyone assist with better settings or config changes, please?

Cheers
Spart
#78
22.7 Legacy Series / Re: DVD ISO installer is corrupt
October 17, 2022, 03:28:18 PM
Yes, plenty of space and permissions are fine.

The message came from Ubuntu 22.04 desktop's right-click 'Extract Here'.

We checked the archive with bzip2 -tv and it checked out OK.

It extracted fine with bzip2 -dv.

Not sure what the issue is.

Cheers
Spart

#79
22.7 Legacy Series / Re: DVD ISO installer is corrupt
October 17, 2022, 10:47:29 AM
Quote from: franco on October 17, 2022, 10:34:52 AM
Checksum checks out?


Cheers,
Franco

Yes, checksum is fine for all three files from the different mirrors.

9345057e993cd55dfa5280beefd33f1dc2243681defff3c5f11b84fa2c7910f8

Can you advise please.

Cheers
Spart
#80
22.7 Legacy Series / DVD ISO installer is corrupt
October 17, 2022, 09:52:03 AM
Hello,

Today we are in the process of migrating our standalone OPNsense server to an ESXi VM. However, we have tried DVD ISO images from 3 different mirrors and they all show this error when trying to uncompress the bz2 image.

Can you advise please.

Cheers
Spart

#81
Hello, I have followed the guides in the documentation and started with a simple bandwidth-limiting pipe and rule for a host on the LAN that frequently downloads large updates, many tens of GB.

I have tried limiting the bandwidth of this host but when checking the live stats in ntopNG is can see it is using double what I have set as the limit.

This is a very simple rule almost a direct copy of the one in the docs.

Both the rule and pipe are activated and I can see them in the status page.

Is there some other service I need to start/restart to have these rules enforced, or does it simply not work the way I am thinking?
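
Since the OPNsense shaper is implemented on top of FreeBSD's ipfw/dummynet, one way to confirm the pipe is actually loaded and matching traffic (a diagnostic sketch, not an official procedure) is from the firewall shell:

```shell
# List configured dummynet pipes: bandwidth, queue sizes and counters
ipfw pipe show

# List the loaded ipfw ruleset with packet/byte counters (-a);
# the shaper rules should show non-zero counters if they match traffic
ipfw -a list
```

If the counters stay at zero, the rule is not matching at all; check the direction and interface, since limiting a host's downloads generally means matching traffic with that host as the destination.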

Any help appreciated.

Cheers
Spart
#82
Talking to myself I know, but useful for me to refer back to and might help others not waste their time trying to optimise this kind of setup.

After much testing and many hours' work installing and re-installing OPNsense on a VirtualBox VM, I have concluded it is a waste of time, as the BSD overhead makes network performance a joke.

I threw resources at it: 8 Xeon CPUs and 16 GB of memory in my Dell R710 test machine. I knew it would likely make no difference, and I was right. I think the issue boils down to crap driver emulation in BSD.

If I build, for instance, an IPCop FW or an OpenWRT router on the same VM, I get Gb speeds from the vNICs. Likewise with an Ubuntu server, etc. I have tried all combos of the virtual NICs VirtualBox provides, the most performant being VirtIO; no surprise there.


We are not talking a small performance degradation; we are talking about 50 Mbit/s throughput with massive variability, seen as low as 6 Mbit/s.

Is this one of those things like 'the Emperor has no clothes': everyone knows about it, but no one wants to talk about how abysmal it is?

Virtualbox has worked fine for years for us. Performance is fine for everything we have ever done apart from this!

Am I missing something?

Cheers
Spart
#83
21.7 Legacy Series / Re: 20.7.8 Upgrade path - Risks
September 14, 2021, 01:41:42 AM
Yeah, not going according to plan. It's been supposedly upgrading for the last 5 hours or so. I just get a little dot to show progress, but I think it's lying.

I can easily get to the console as I am trialing the upgrade on a backup of the live router in a VM.

Is there a way I can see what it's actually doing? top does not show anything useful that I can see.

By now I would have thought it would have finished or crashed or something.

Cheers
Spart
#84
21.7 Legacy Series / Re: 20.7.8 Upgrade path - Risks
September 13, 2021, 03:23:13 PM
@franco

Thanks for the reply. We are running 20.7.8 and it is stable for a while now.

Is the recommendation to simply unlock the upgrade from the UI and follow along, or to upgrade from the console?

Are there any gotchas that you are aware of with 1 LAN connection and 1 WAN PPPoE VDSL connection using these drivers:

/home/admin # pciconf -lv | grep -A1 -B3 network
bce0@pci0:1:0:0: class=0x020000 card=0x7059103c chip=0x163914e4 rev=0x20 hdr=0x00
    vendor     = 'Broadcom Inc. and subsidiaries'
    device     = 'NetXtreme II BCM5709 Gigabit Ethernet'
    class      = network
    subclass   = ethernet
bce1@pci0:1:0:1: class=0x020000 card=0x7059103c chip=0x163914e4 rev=0x20 hdr=0x00
    vendor     = 'Broadcom Inc. and subsidiaries'
    device     = 'NetXtreme II BCM5709 Gigabit Ethernet'
    class      = network
    subclass   = ethernet


It's a 2-port HP card, one side LAN, the other WAN.

Cheers
Spart
#85
21.7 Legacy Series / 20.7.8 Upgrade path - Risks
September 13, 2021, 12:27:24 PM
Hello,

We are running a fully updated 20.7.8 system. Is there a guide to upgrading safely from 20.7.8 to the latest 21.7?

Cheers
Spart
#86
Non-trivial piece of work, as I have other VMs running and have been for many years without issues. But thanks for the suggestion.

I am now pretty certain this is nothing to do with VBox and everything to do with OPNsense/BSD NIC drivers.

More testing. I ran an iperf test from other VMs on the same host server as my OPNsense router to other LAN clients, using the same Pro 1000 MT virtual NIC. I get essentially 1 Gbit/s across the LAN from any other VM I am running. Most are Ubuntu servers; I even fired up some older Ubuntu 16.04 servers, and they all run across the network at a gigabit.

The VM NIC interface provided by VBox to the guest is based on the Intel 82545EM. The FreeBSD em driver has been around since FreeBSD 4.x according to the documentation, and Intel is recommended as the NIC of choice.

I can see it is loading the em driver at boot, and it assigns the 2 NICs as em0 and em1, but the throughput is terrible.

I know it's not that scientific a test, but it is real world. I am getting a tiny fraction of the throughput into/out of the LAN interface in OPNsense.

Is BSD really that sh*t at networking? We are talking about a massive reduction in throughput via OPNsense. Do I need a different driver in OPNsense? All this despite the FreeBSD documentation stating that the em driver supports the following:

The driver supports Transmit/Receive checksum offload and Jumbo Frames on all but 82542-based adapters.

Furthermore it supports TCP segmentation offload (TSO) on all adapters but those based on the 82543, 82544 and 82547 controller chips. The identification LEDs of the adapters supported by the em driver can be controlled via the led(4) API for localization purposes. For further hardware information, see the README included with the driver.

If I set the VBox VM to use VirtIO drivers, then I need the guest (OPNsense) to configure itself correctly on boot to use the virtio drivers.

UPDATE:

I set the LAN NIC as VirtIO and left the WAN NIC alone.

The drivers are loaded correctly it seems.

pciconf -lv | grep -A1 -B3 network
    subclass   = VGA
virtio_pci0@pci0:0:3:0: class=0x020000 card=0x00011af4 chip=0x10001af4 rev=0x00 hdr=0x00
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio network device'
    class      = network
    subclass   = ethernet
--
em0@pci0:0:8:0: class=0x020000 card=0x075015ad chip=0x100f8086 rev=0x02 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82545EM Gigabit Ethernet Controller (Copper)'
    class      = network
    subclass   = ethernet


Throughput is a little better, up from around 10-20 Mbit/s to 80-100 Mbit/s, but still 10x slower.

Interesting flags on the vtnet0 interface:

vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=c00b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO,LINKSTATE>


which seems to suggest it supports TSO etc. Currently they are disabled in the config.
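
Given the flags show the capabilities present but disabled, it may be worth toggling them and re-measuring; a sketch using standard FreeBSD ifconfig options on vtnet0 (runtime-only changes, lost on reboot):

```shell
# Enable checksum offload and TSO on the virtio NIC for a test run
ifconfig vtnet0 txcsum rxcsum tso

# Re-test throughput, then disable again if it gets worse
ifconfig vtnet0 -txcsum -rxcsum -tso

# Confirm which options are currently active
ifconfig vtnet0 | grep options
```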


Any help appreciated.

Cheers
Spart
#87
Quote from: spikerguy on September 11, 2021, 12:40:13 AM
To use opnsense on rpi you will need a usb to lan adaptor, and not all devices are supported in freebsd.

Also performance will not be good with a usb adaptor, hence no one is working on it.

I was under the impression that the USB3-type adaptors were essentially running at Gb speed, and the Pi4's native Gb adaptor is not limited by the USB subsystem as on previous generations of Pi.

UPDATE:

Just for a bit of fun, and mainly because I can, I built a new RPi4 running Ubuntu Server 21.04 and attached a Jcreate USB adaptor.

Which result is the USB3 Adaptor?

iperf3 -p 5201 -c 192.168.0.18
Connecting to host 192.168.0.18, port 5201
[  4] local 192.168.0.105 port 60016 connected to 192.168.0.18 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   113 MBytes   952 Mbits/sec    0    631 KBytes       
[  4]   1.00-2.00   sec   108 MBytes   904 Mbits/sec    0    631 KBytes       
[  4]   2.00-3.00   sec   108 MBytes   902 Mbits/sec    0    631 KBytes       
[  4]   3.00-4.00   sec   108 MBytes   904 Mbits/sec    0    631 KBytes       
[  4]   4.00-5.00   sec   108 MBytes   905 Mbits/sec    0    631 KBytes       
[  4]   5.00-6.00   sec   108 MBytes   902 Mbits/sec    0    631 KBytes       
[  4]   6.00-7.00   sec   111 MBytes   932 Mbits/sec    0    631 KBytes       
[  4]   7.00-8.00   sec   111 MBytes   932 Mbits/sec    0    631 KBytes       
[  4]   8.00-9.00   sec   111 MBytes   935 Mbits/sec    0    631 KBytes       
[  4]   9.00-10.00  sec   111 MBytes   935 Mbits/sec    0    631 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.07 GBytes   920 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.07 GBytes   917 Mbits/sec                  receiver

iperf Done.
iperf3 -p 5201 -c 192.168.0.20
Connecting to host 192.168.0.20, port 5201
[  4] local 192.168.0.105 port 40788 connected to 192.168.0.20 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   112 MBytes   941 Mbits/sec   13    305 KBytes       
[  4]   1.00-2.00   sec   110 MBytes   925 Mbits/sec    0    378 KBytes       
[  4]   2.00-3.00   sec   110 MBytes   924 Mbits/sec    0    386 KBytes       
[  4]   3.00-4.00   sec   110 MBytes   920 Mbits/sec    0    386 KBytes       
[  4]   4.00-5.00   sec   109 MBytes   916 Mbits/sec    0    386 KBytes       
[  4]   5.00-6.00   sec   110 MBytes   919 Mbits/sec    0    386 KBytes       
[  4]   6.00-7.00   sec   110 MBytes   921 Mbits/sec    0    390 KBytes       
[  4]   7.00-8.00   sec   110 MBytes   919 Mbits/sec    0    390 KBytes       
[  4]   8.00-9.00   sec   110 MBytes   921 Mbits/sec    0    393 KBytes       
[  4]   9.00-10.00  sec   110 MBytes   923 Mbits/sec    0    393 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.07 GBytes   923 Mbits/sec   13             sender
[  4]   0.00-10.00  sec  1.07 GBytes   921 Mbits/sec                  receiver

Just needs decent drivers on BSD!

Cheers
Spart
#88
I have struggled for a while troubleshooting what I thought was internet settings. In the end it turned out to be VirtualBox network performance.

I tried just about every combination of virtual NIC in VBox and the myriad of settings. Throughput was variable at best: one second 95% of the available ISP bandwidth, the next 20%, with no idea why.

I am not a networking expert.

The OPNsense setup is a VirtualBox VM running on a Dell R710 with 24 cores, 128 GB of memory and 1 x 4-port Intel GB NIC.

2 of the NIC ports are dedicated to the VM with bridged networking. I tried VirtIO but could not get it to run.

The fastest VBox emulated NIC was the Intel Pro/1000 MT Server. The rest were variously slower. The VM has 16 GB of memory and 4 cores.

In the end, a ridiculously modest old Dell Optiplex with an Intel(R) Core(TM)2 CPU 6400 @ 2.13GHz (2 cores) and a PCIe x1 HP 2-port GB NIC massively outperformed the VM: full maximum ISP bandwidth all of the time. I use a dedicated RPi4 to monitor the internet connection 24x7, running this awesome little build: https://github.com/geerlingguy/internet-monitoring

The graph went from looking like a bad picture of mountains, with massive variability in the bandwidth my network was seeing, to essentially a solid block running at the max profile BT provide us (40 Mb down, 10 Mb up), with a tiny amount of variability of a few tens of kb/s.

The reason I moved to a VM was to cut down on the number of machines I was running.

If someone knows the trick to getting full, or nearly full, performance out of a VBox virtual NIC then please post it, as I would love to virtualise OPNsense again.

iperf between the LAN side of OPNsense and my desktop was c. 80 Mbit/s using the virtual NIC; using the physical Dell Optiplex it's essentially 1 Gbit/s, running at 980-ish Mbit/s. So sometimes much worse than a 10th of the potential.

Any advice appreciated.

Cheers
Spart :(
#89
I had to remove the Redis DB and then restart the service. After that Redis started, and then I could start ntopng.

cd /var/db/redis
rm redis.rdb

Restart the redis service from the GUI.
Restart the ntopng service from the GUI.
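
For reference, the same recovery can be done entirely from the console; the rc script names here ('redis', 'ntopng') are assumptions, so verify them with `service -l` first:

```shell
# Stop the consumer first, then redis itself
service ntopng stop
service redis stop

# Remove the corrupt dump; redis writes a fresh one on next save
rm /var/db/redis/redis.rdb

service redis start
service ntopng start
```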

Seems to be up and stable.

Hope this helps someone else.

Cheers
Spart
#90
I don't recall. Whatever version it was in late September, when it was last updated.

Cheers
Spart