Messages - marcosscriven

#1
Just bumping this in the hope someone can help, please. I'm essentially trying to forward traffic from one host to another while preserving the source IP.
#2
Thanks - I just tried that, but all it seems to have done is skip creating the firewall rule.

When I look at Wireshark on the target, I see traffic coming from the original source, but the destination is the server targeted by the NAT rule, and not the original target.
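For anyone reproducing this, the same check can be done with tcpdump on the target host - a sketch only, with em0 standing in for the target's NIC:

[code]
# Watch inbound HTTP/HTTPS and inspect the destination address of each packet.
# em0 is an assumed interface name; substitute the target host's NIC.
tcpdump -ni em0 'tcp port 80 or tcp port 443'
[/code]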

The documentation at https://docs.mitmproxy.org/stable/concepts-modes/#transparent-proxy specifically warns about this:

Quote from the mitmproxy docs:
This distinction is important: when the packet arrives at the mitmproxy machine, it must still be addressed to the target server. This means that Network Address Translation should not be applied before the traffic reaches mitmproxy, since this would remove the target information, leaving mitmproxy unable to determine the real destination.

Any ideas here please?

I somehow need to send traffic from one host to another on the same subnet while keeping the original destination address intact. (I'd thought this was called "masquerade" in Linux parlance, but masquerade actually rewrites the source address, which is exactly what I don't want.)
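For reference, here's roughly the kind of pf rule I have in mind - purely a sketch, with 192.168.1.10 standing in for the mitmproxy host and vtnet0/vtnet1 as placeholder interface names:

[code]
# Sketch only: forward web traffic to the proxy host without NAT.
# route-to changes the next hop but leaves the packet's source and
# destination addresses untouched, unlike an rdr/port-forward rule.
# 192.168.1.10, vtnet0 and vtnet1 are placeholders.
proxy_host = "192.168.1.10"
pass in quick on vtnet0 route-to (vtnet1 $proxy_host) \
    proto tcp from any to any port { 80, 443 }
[/code]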
#3
I searched the forum for this, and the closest I could find was this: https://forum.opnsense.org/index.php?topic=2063.0

I'm just trying to set up redirection to a transparent proxy, following the OpenBSD instructions here (OPNsense is FreeBSD-based under the hood, but its pf firewall comes from OpenBSD): https://docs.mitmproxy.org/stable/howto-transparent/#openbsd

It says to add this to /etc/pf.conf:

[code]
mitm_if = "re2"
pass in quick proto tcp from $mitm_if to port { 80, 443 } divert-to 127.0.0.1 port 8080
[/code]


This assumes mitmproxy is running on the local machine, but on my OPNsense router I'd have to point it at another IP address on the VLAN.

However, it's not clear to me how to achieve the same thing in OPNsense. Originally I simply tried port forwarding, before realising that wouldn't work.

Any ideas here please? If it's not possible in the GUI, how do I do it manually?
#4
Further to my previous post, I actually fixed this just by turning on all the hardware offloading options in "Interfaces -> Settings".

That includes CRC, TSO, and LRO. I unchecked the 'disable' boxes and rebooted.
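To confirm the offloads actually took effect, I also checked from a shell - a rough sketch, with vtnet0 assumed to be the LAN NIC:

[code]
# FreeBSD lists active offloads in the options<...> field.
# vtnet0 is an assumed interface name; substitute your own.
ifconfig vtnet0 | grep options

# The flags can also be toggled by hand for testing (not persistent):
ifconfig vtnet0 rxcsum txcsum tso lro      # enable offloads
ifconfig vtnet0 -rxcsum -txcsum -tso -lro  # disable offloads
[/code]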

Now I get rock-solid iperf3 results:

[code]
[  5] 166.00-167.00 sec   112 MBytes   941 Mbits/sec
[  5] 167.00-168.00 sec   112 MBytes   941 Mbits/sec
[  5] 168.00-169.00 sec   112 MBytes   941 Mbits/sec
[  5] 169.00-170.00 sec   112 MBytes   941 Mbits/sec
[/code]

And the NIC interrupt load dropped to around 25%:

[code]
  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   11 root        155 ki31     0B    32K RUN      1   3:14  77.39% [idle{idle: cpu1}]
   11 root        155 ki31     0B    32K RUN      0   3:06  71.26% [idle{idle: cpu0}]
   12 root        -92    -     0B   400K WAIT     0   0:55  28.35% [intr{irq29: virtio_pci1}]
91430 root          4    0    17M  6008K RUN      0   0:43  21.94% iperf3 -s
[/code]


What confused me was:

1) The offloading is disabled by default (I'm not sure why).
2) I didn't think it would apply to virtio devices, but clearly they implement the right features to support it.

EDIT

Arghh - perhaps not. While this fixed the LAN side, the WAN-side throughput has suddenly plummeted.

This is strange, because it's the same virtio driver on a separate NIC of exactly the same type.
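If anyone else hits this: my next test is to turn LRO and TSO back off on just the WAN NIC, since LRO in particular is known to cause trouble on interfaces that forward packets. A sketch, with vtnet1 assumed to be the WAN interface:

[code]
# Temporarily disable LRO/TSO on the WAN NIC only, then re-run iperf3.
# vtnet1 is a placeholder; substitute your WAN interface.
ifconfig vtnet1 -lro -tso
[/code]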
#5
EDIT - Resolved - see next post

Original post:

Quote from: iamperson347 on December 05, 2021, 07:48:25 PM
I'm chiming in to say I have seen similar issues. Running on proxmox, I can only route about 600 mbps in opnsense using virtio/vtnet. A related kernel process in opnsense shows 100% cpu usage and the underlying vhost process on the proxmox host is pegged as well.

I'm seeing throughput all over the place on a similar setup (i.e. in a Proxmox VM):


[code]
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  97.0 MBytes   814 Mbits/sec
[  5]   1.00-2.00   sec   109 MBytes   911 Mbits/sec
[  5]   2.00-3.00   sec   111 MBytes   934 Mbits/sec
[  5]   3.00-4.00   sec   103 MBytes   867 Mbits/sec
[  5]   4.00-5.00   sec   100 MBytes   843 Mbits/sec
[  5]   5.00-6.00   sec   112 MBytes   937 Mbits/sec
[  5]   6.00-7.00   sec   109 MBytes   911 Mbits/sec
[  5]   7.00-8.00   sec  75.7 MBytes   635 Mbits/sec
[  5]   8.00-9.00   sec  68.9 MBytes   578 Mbits/sec
[  5]   9.00-10.00  sec  96.6 MBytes   810 Mbits/sec
[  5]  10.00-11.00  sec   112 MBytes   936 Mbits/sec
[/code]


And while that's happening, I see the virtio_pci interrupt thread maxing out:


[code]
  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   12 root        -92    -     0B   400K CPU0     0  21:42  94.37% [intr{irq29: virtio_pci1}]
51666 root          4    0    17M  6600K RUN      1   0:18  68.65% iperf3 -s
   11 root        155 ki31     0B    32K RUN      1  20.4H  13.40% [idle{idle: cpu1}]
   11 root        155 ki31     0B    32K RUN      0  20.5H   3.61% [idle{idle: cpu0}]
[/code]


Are there any settings that could help with this, please?

I'm on OPNsense 22.1.6.
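For reference, the two knobs I'm planning to experiment with - neither confirmed to help yet: virtio multiqueue on the Proxmox side, and the vtnet driver settings on the OPNsense side. The VM ID and device numbers below are placeholders:

[code]
# On the Proxmox host: give the virtio NIC multiple queues so the
# interrupt load can spread across vCPUs (100 and net0 are placeholders;
# note this command regenerates the NIC's MAC address).
qm set 100 --net0 virtio,bridge=vmbr0,queues=4

# On OPNsense: inspect what the vtnet driver is actually using.
sysctl dev.vtnet.0 | grep -i queue
[/code]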