Messages - Layer8

#1
I can't contribute a solution, but I think my problem fits in here.

We are using Keepass2Android on our mobile phones. Some time ago we noticed that it is no longer possible to access the KeePass database, which is located on a Nextcloud server behind an nginx on an OPNsense.

I just found out that the problem was the Bot Protection of the nginx. After disabling it, we can access the Nextcloud server with Keepass2Android again.

The strange thing is that it was possible to access Nextcloud with the Nextcloud Android app and other WebDAV clients the whole time.

I hope this info helps other people who are looking for a solution.

I am also interested in a solution that would allow me to enable the Bot Protection again.

Edit: Keepass2Android threw this error message: protocol=h2, code=403
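
For anyone who wants to narrow this down before switching the Bot Protection back on: one way to check whether the block is User-Agent based is to replay the WebDAV request with curl using different User-Agent strings. This is only a sketch; the URL, the credentials and both User-Agent strings are placeholders, not confirmed values.

  # Placeholder WebDAV URL of the database on the Nextcloud server
  URL="https://cloud.example.com/remote.php/dav/files/USER/keepass.kdbx"

  # 1) Browser-like User-Agent (typically passes bot protection)
  curl -u USER:PASSWORD -A "Mozilla/5.0" -s -o /dev/null -w "%{http_code}\n" "$URL"

  # 2) App-like User-Agent; if this one returns 403 while the first one
  #    returns 200, the Bot Protection is filtering on the User-Agent header
  curl -u USER:PASSWORD -A "Keepass2Android" -s -o /dev/null -w "%{http_code}\n" "$URL"

If the User-Agent turns out to be the trigger, an exception for that agent in the nginx configuration might allow re-enabling the Bot Protection, but I have not verified that.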
#2
24.1, 24.4 Legacy Series / Re: No console menu
February 26, 2024, 07:13:52 PM
Makes sense. Thanks.

Just a small improvement suggestion: would it be possible to activate the console permanently over RS-232 and USB in the default settings?
#3
24.1, 24.4 Legacy Series / Re: No console menu
February 26, 2024, 06:15:30 PM
Quote from: Patrick M. Hausen on February 25, 2024, 11:55:51 PM
Is System > Settings > Administration ... all the Console settings really configured as it should be?

Because on a system without any VGA and serial console only the FreeBSD kernel will default to serial output. But once the booted OS and the services take over you still need an explicit configuration to activate e.g. login or in the case of OPNsense the menu on the serial console ...

I thought so, because it looks like the default (which works on all my other installations):

Console driver: checked
Primary Console: Serial Console
Secondary Console: None
Serial Speed: 115200
USB-based serial: checked
Console menu: checked
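
As a cross-check (only a sketch of how I would verify it from the shell, assuming SSH access still works), the loader side of the serial console configuration can be compared against one of the working installations:

  # Which console(s) and serial speed the loader is configured for;
  # compare the output against a working installation
  grep -i -E 'console|speed' /boot/loader.conf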

Quote from: doktornotor on February 26, 2024, 04:59:31 PM
System - Settings - Administration - Console: do NOT check the "USB-based serial" checkbox.

Disabling the USB-based serial solved the issue. Thanks!

I am just wondering why it worked right after I reinstalled OPNsense and only stopped working now.

Could the issue be caused by the Android mobile phone that I am using over USB for WAN connectivity?
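
Just as a diagnostic idea (not a confirmed explanation): one could check from the shell whether the tethered phone shows up as an additional USB serial/modem device that the "USB-based serial" option might latch onto.

  # List all attached USB devices; the tethered phone should appear here
  usbconfig

  # Check whether a USB serial/modem driver attached to anything
  dmesg | grep -i -E 'umodem|u3g|uslcom|ttyU'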
#4
24.1, 24.4 Legacy Series / Re: No console menu
February 25, 2024, 11:43:03 PM
I don't think so.

I can see the entire boot process over the RS-232 console connection in Tera Term: the BIOS initialization, the OPNsense boot loader, and how the configs get loaded.

The only thing that's missing is the menu part of the output quoted below:

Quote:
HTTPS: SHA256 37 BF 97 F4 FD 3B D9 5D 88 03 E2 9A 15 E5 26 B9
SSH:   SHA256 key (ECDSA)
SSH:   SHA256 key (ED25519)
SSH:   SHA256 key (RSA)

  0) Logout                              7) Ping host
  1) Assign interfaces                   8) Shell
  2) Set interface IP address            9) pfTop
  3) Reset the root password            10) Firewall log
  4) Reset to factory defaults          11) Reload all services
  5) Power off system                   12) Update from console
  6) Reboot system                      13) Restore a backup

Enter an option:

But the cursor is blinking one line below "SSH:   SHA256 key (RSA)".

A blinking cursor means that the RS-232 connection is alive.
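
Since the kernel output clearly reaches the serial port, the missing piece should be on the userland side. A quick way to check that over SSH (a sketch, assuming the serial port is ttyu0, the usual name on the APU):

  # Is a getty/login process attached to the serial port?
  ps -ax | grep ttyu0 | grep -v grep

  # Is the serial terminal enabled in /etc/ttys ("on" or "onifconsole")?
  grep ttyu0 /etc/ttys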
#5
24.1, 24.4 Legacy Series / Re: No console menu
February 25, 2024, 06:59:44 PM
I can't see the menu over the RS-232 console and there is no prompt at all, so I can't type anything over RS-232. The last output lines are the SHA256 keys. Below these keys I can see a blinking cursor, but I can't enter anything.

Console output over VGA is not available because the APU 1D4 does not have a video output.

When I log in over SSH, I can see the menu. I tried your command over SSH, but this did not solve the problem.
#6
24.1, 24.4 Legacy Series / No console menu
February 25, 2024, 05:23:00 PM
Hi,

I have the exact same issue with 24.1.2 as described here: https://forum.opnsense.org/index.php?topic=23412.0

It's an APU 1D4 (AMD G-T40E processor, 4 GB RAM and a 32 GB SSD).

In my case, I reinstalled with the 24.1 ISO because I switched to a ZFS-based installation some weeks ago. I restored the config backup and at first everything worked fine, including the console menu.

Today I noticed that the console menu is not displayed anymore. I have no clue what the reason for this issue is.

How can I fix this?

No other issues have been detected so far. A restart does not help.

#7
Same problem here. My WebUI is not responding. In my case it might also be the os-dyndns plugin.

Reloading the services over the console does not help.

How can I fix this issue?

Edit: The second sentence of post #2 solved the GUI problem for me. I missed that part while reading this topic on my mobile. Sorry.

I uninstalled the plugin and everything is working again here. Thanks.
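
For anyone else stuck with an unreachable GUI: the plugin can also be removed from the console. This is only a sketch and assumes the plugin package is named os-dyndns.

  # From the console menu choose 8) Shell, then:
  pkg delete -y os-dyndns
  exit
  # Back in the menu, 11) Reload all services brings the WebUI back up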
#8
@schmuessla: Thanks, but which information in this documentation is the helpful part? If you mean that it answers the question of why everyone says one should deactivate hardware acceleration, then yes, that could be the answer.

@all: It was possible to run iperf3 from my Windows client to the OPNsense VM with a vmxnet3 adapter at over 9 Gbit/s in both directions:


Send:
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-255.90 sec   277 GBytes  9.31 Gbits/sec                  sender
[  4]   0.00-255.90 sec  0.00 Bytes  0.00 bits/sec                  receiver
iperf3: interrupt - the client has terminated

Reverse:
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-86.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  4]   0.00-86.04  sec  92.9 GBytes  9.28 Gbits/sec                receiver
iperf3: interrupt - the client has terminated



And I don't know exactly why.

What I have done since yesterday:
- I added a vSphere 8 Enterprise Plus licence key (it was a free ESXi licence before)
- By adding the licence key, it became possible to activate SR-IOV. I activated it on the Intel X550-T2 adapter
- I added an SR-IOV device to the VM
- I assigned the new SR-IOV-based ixv0 adapter to an interface in OPNsense, activated it and locked it against removal
- I added an IP to the ixv0 interface and tested throughput with iperf, but it was not faster than yesterday (4-5 Gbit/s)
- I then tried to attach the ixv0 NIC to an older, existing interface, but this was not possible because of some "Interface ue0 does not exist" warnings (one interface is attached to Android USB tethering, others are deactivated WireGuard tunnels)
- I then switched the WireGuard tunnels on to solve the "Interface does not exist" problem and to be able to assign interfaces again
- I also deleted the interface that was still attached to ixv0. ixv0 is available for a new assignment at the moment.

During all these steps, it was not possible to get close to 10G with iperf. Most of the time it was under 5G, one time only 1G.

But now I can run iperf3 benchmarks at 9-10 Gbit/s.


Can someone explain this?


Edit: I always run iperf against 10.1.1.1/24, which was assigned to vmx0 the whole time.
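
For reproducibility, these are the kinds of invocations used (10.1.1.1 is the address mentioned above, iperf3 -s is running on the OPNsense side; stream count and duration are just examples):

  # Single stream, client -> OPNsense
  iperf3 -c 10.1.1.1 -t 30

  # Three parallel streams
  iperf3 -c 10.1.1.1 -t 30 -P 3

  # Reverse mode (OPNsense sends, client receives)
  iperf3 -c 10.1.1.1 -t 30 -P 3 -R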






#9
Long story short on the current state:

Throughput simply doubles when you fully enable the hardware acceleration of the network card while using vmxnet3.

Above all, the CPU load drops by slightly more than half!
#10
No. That's something I hadn't thought of before, because everywhere you read that you should leave hardware acceleration completely deactivated.

After activating the acceleration as shown in your attachment, I see the following results on the matured OPNsense:

~5.3 Gbit/s from client to OPNsense, ~5.8 Gbit/s in reverse mode, both with only one iperf3 stream,
~7.4 Gbit/s from client to OPNsense, ~7 Gbit/s in reverse mode, both with two iperf3 streams,
~9 Gbit/s from client to OPNsense, ~7.8 Gbit/s in reverse mode, both with three iperf3 streams

CPU utilization with three streams is now: 26% / 25% in reverse.

This doubled the speed in some iperf3 benchmark scenarios, which means OPNsense is now on the same level as a plain FreeBSD 13.2 installation.
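
For reference, and only as a sketch: assuming the "hardware acceleration" checkboxes correspond to the usual FreeBSD interface offload capabilities, they can be inspected and toggled temporarily from a shell for testing (the persistent switches stay under Interfaces > Settings in the GUI):

  # Show which offload capabilities are currently active on the vmxnet3 NIC
  ifconfig vmx0 | grep -i options

  # Temporarily enable checksum offload, TSO and LRO for a test run;
  # this is not persistent and is lost on the next reconfigure/reboot
  ifconfig vmx0 rxcsum txcsum tso4 tso6 lro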

I will do some tests with routed traffic in the next couple of days.



Thanks a lot for this hint!

This leads to the question of why everyone says that one should disable hardware acceleration. What is the reason for this widespread recommendation, and is there now any disadvantage in a VMware scenario?
#11
I skipped the E1000(e) test and instead ran a test with plain FreeBSD:

https://forum.opnsense.org/index.php?topic=36023.0
#12
Hi all,

I see very poor throughput with OPNsense installations running in a VMware VM (ESXi 8) using vmxnet3 vNICs in fast networks. The following tests are all based on a 10G network.

Matured OPNsense:
The reason for this thread: I can't get over 2.7 Gbit/s with iperf3 between the OPNsense and my Windows client (both in the same VLAN with one switch in between) in a single-stream connection. It's not an iperf3 problem, because routed traffic is also not faster than 2.7 Gbit/s. The 2.7 Gbit limit seems to be an overall limitation.

Other VMs (Linux, Windows)
I can easily get over 9 Gbit/s with Linux- or Windows-based VMs, so it's definitely not a hardware, network or hypervisor related limitation.

Fresh plain FreeBSD 13.2
I installed a fresh plain FreeBSD 13.2 with iperf3, using the same VM settings as the matured OPNsense (8 vCPUs, 8 GB RAM, vmxnet3 adapter) on the same ESXi host. The results are:

~5 Gbit/s from client to FreeBSD, 6 Gbit/s in reverse mode, both with only one iperf3 stream,
~8.5 Gbit/s from client to FreeBSD, 7.3 Gbit/s in reverse mode, both with two parallel iperf3 streams,
~9.2 Gbit/s from client to FreeBSD, 8 Gbit/s in reverse mode, both with three parallel iperf3 streams,

With three streams, the CPU utilization of the VM is at 42% from client to FreeBSD and only 18% in reverse mode. I don't know whether this unequal utilization is an iperf3 or a FreeBSD issue.

Fresh plain OPNsense 23.7.4
After the results with plain FreeBSD 13.2, I installed a fresh plain OPNsense 23.7 (ISO file downloaded today) in a VM with the exact same VM settings again, updated it to 23.7.4 and installed the iperf3 plugin (which is based on iperf v3.13). I applied an allow-all floating rule to allow incoming connections to the iperf3 daemon.

When I started iperf3 for the first time, I saw this result:
~1 Gbit/s from client to OPNsense with only one iperf3 stream

Because the iperf3 plugin stops the iperf3 daemon once a test is cancelled, I restarted the iperf3 daemon in the OPNsense dashboard after every test. The result after the first restart was:
~2.7 Gbit/s from client to OPNsense with only one iperf3 stream

So this is the first weird behaviour in OPNsense. Why did the first test run at 1G and the second at 2.7G? I was able to reproduce this after reverting the VM snapshot.

I continued with the normal testing after this. Here are all results with the firewall and one allow-all floating rule enabled:

~2.8 Gbit/s from client to OPNsense, ~3.1 Gbit/s in reverse mode, both with only one iperf3 stream,
~5.2 Gbit/s from client to OPNsense, ~3.9 Gbit/s in reverse mode, both with two iperf3 streams,
~7.4 Gbit/s from client to OPNsense, ~4.1 Gbit/s in reverse mode, both with three iperf3 streams
~9 Gbit/s from client to OPNsense, ~5 Gbit/s in reverse mode, both with ten iperf3 streams

For comparison, here is the CPU utilization for the test with three streams: 57% / 47%.


To make sure that this is not an issue with the iperf3 plugin of OPNsense, I also uninstalled the plugin and installed iperf3 over the CLI using pkg install iperf3 (v3.14, a bit newer than on FreeBSD 13.2). I then started iperf3 with iperf3 -s. The result is:

~7.3 Gbit/s from client to OPNsense, ~4.0 Gbit/s in reverse mode, both with three iperf3 streams

Because the result is nearly the same as with the iperf3 plugin, I only tested it once with three streams.


I also disabled the firewall function via Firewall -> Settings -> Advanced -> Disable Firewall. I removed the floating rule to check whether the firewall is really disabled. Here are the results:

~3.2 Gbit/s from client to OPNsense, ~3.8 Gbit/s in reverse mode, both with only one iperf3 stream,
~5.4 Gbit/s from client to OPNsense, ~4.5 Gbit/s in reverse mode, both with two iperf3 streams,
~7.4 Gbit/s from client to OPNsense, ~4.3 Gbit/s in reverse mode, both with three iperf3 streams
~9 Gbit/s from client to OPNsense, ~5 Gbit/s in reverse mode, both with ten iperf3 streams

For comparison, here is the CPU utilization for this test with three streams: 50% / 47%.

With the firewall disabled, throughput is a bit higher, but not by much. It looks like the answer is not related to pf.
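
To double-check that pf is really out of the picture during the "firewall disabled" runs, the status can also be read directly from a shell (a sketch, assuming shell access on the VM):

  # The first line shows "Status: Enabled ..." or "Status: Disabled ..."
  pfctl -s info | head -1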


Matured OPNsense 23.7.4

To show that it is totally different on a matured OPNsense, I did a quick test again to have some results here:

~2.7 Gbit/s from client to the matured OPNsense, ~3.4 Gbit/s in reverse mode, both with three iperf3 streams

CPU utilization: 35% / 30%.



Result

After this test series, it really looks like there is a bottleneck in OPNsense when it is installed in VMware with vmxnet3 adapters. I think it is not a FreeBSD or driver issue, because throughput with plain FreeBSD is much better than with OPNsense. FreeBSD's performance is itself far from perfect compared with the Linux and Windows VMs.

OPNsense uses a lot more compute power for less throughput than a plain FreeBSD.

I understand that there is possibly a lot of additional processing in the network components of OPNsense, which causes a bigger overhead. But I think core features of a good firewall are efficiency and scalability, provided the hardware is good enough.

What is the reason for this issue?
#13
I can gladly test that again at home. But the E1000/E1000e adapters are simply much less efficient and need considerably more compute time at the hypervisor level compared to vmxnet3.
#14
Not at home yet, but at work some time ago. I think it topped out at 1G there (as the name already suggests). Should more be possible?
#15
Hello everyone,

is it known whether the performance problems when using VMXNET3 adapters in a VMware VM have since been fixed, will be fixed, or can be worked around with some trick?

I have now switched my network at home entirely to 10G, and with the Sense in a VMware VM I simply cannot get more than 2.7 Gbit/s, regardless of whether it is iperf3 or routed traffic. It also makes no difference whether the Sense has 2 or 10 vCPUs. It just will not go any faster.

The hardware I am using (CAT7 installation cable, a good 10G switch, 10G physical NICs and modern Ryzen systems) is definitely strong enough. With other VMs in the same network as my workstation, 9-10 Gbit/s of throughput is easily achievable. As soon as the Sense sits in between, it will not go any faster.

Very frustrating.