Messages - andbaum

#1
19.1 Legacy Series / Re: VMWare Tools for Hardened BSD
February 08, 2019, 10:55:18 AM
Thanks, I checked it. The tools seem to work despite the warning.
#2
Quote from: Ricardo on February 07, 2019, 07:18:25 PM
unless you switched it off explicitly
In this report, NAT was enabled and my client (10.0.0.10) was masqueraded towards the server at 172.20.1.1. I think the use of UDP rather than TCP explains the better bandwidth results: in UDP mode, iperf sends at the requested rate without TCP's congestion control and acknowledgement overhead.
#3
You are right.
But in my first attempt I just wanted to see whether the NICs are really capable of 1 GBit/s, or whether there is some issue like the RPi USB bottleneck.
After seeing this result, I built a little setup with a CubieTruck (had one lying around) on the WAN side (actually my DMZ) and my MacBook on the LAN side.
The CubieTruck manages about 700-750 MBit/s (same switched LAN, TCP performance in iperf). Without IDS (NAT and about 30 firewall rules enabled), my APU board handled about 500-550 MBit/s to the CubieTruck.
As my internet connection only offers 100 MBit/s downstream, I could play with some IDS rules. Now I have "tuned" my setup to about 200-250 MBit/s throughput with about 20,000 "simple" IDS rules enabled. By simple, I mean blacklisted IPs, URLs, etc. With more complex rules enabled (ones that really inspect traffic behavior), a single ruleset can easily bring the throughput down to about 50 MBit/s.
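For reference, the TCP runs here use a plain iperf pair through the firewall, roughly like this (a sketch; the port number and WAN-side address are reused from my UDP test further down):

# On the WAN-side host (the CubieTruck here): start an iperf server
iperf -s -p 4712

# On the LAN-side client (the MacBook): 60-second TCP test through the firewall
iperf -c 172.20.1.1 -p 4712 -t 60 -i 10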

Hope this brings some clarity.
#4
Hello everyone,

In my lab, I have an OPNsense 19.1 installation on an ESXi server.
After installing the VMware tools plugin, ESXi complains that the configured guest OS (FreeBSD (64-bit)) doesn't match the running guest OS (FreeBSD 11.2-RELEASE-p8-HBSD).
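For reference, one way to check whether the tools daemon is running despite the warning (a sketch; this assumes the plugin ships open-vm-tools with its vmtoolsd daemon):

# Check that the open-vm-tools daemon is running on the guest
ps ax | grep '[v]mtoolsd'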

Any advice?

Andreas
#5
Shame on me...  ::)
I solved it. 483 days of uptime on the switch -> after rebooting the switch, the firewall doesn't see any local-to-local packets any more...

Yours, Andreas
#6
Thanks for your comment.

I was actually able to get rid of the log entries by setting the state tracking for my "LAN to LAN allow any" rule to none.
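As far as I understand, that setting roughly corresponds to a stateless pf rule like this (a sketch; the interface name igb0 and the 10.0.0.0/24 LAN net are assumptions):

# Stateless pass rule: pf no longer tracks connection state on this path,
# so asymmetric or stray LAN packets are not dropped as out-of-state
pass in quick on igb0 inet from 10.0.0.0/24 to 10.0.0.0/24 no state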

But I still wonder why my firewall (the gateway at 10.0.0.1) sees switched traffic (Netgear ProSafe) between internal LAN devices at all.
(The 10.0.0.10 server was only one example; I randomly see traffic between other internal IPs being blocked as well.)
#7
Really sad - the update to 19.1 (BTW: cool product 8)) didn't fix it?
Does anyone know a workaround to bring IPv6 HTTP traffic transparently through the OPNsense squid?

Yours,

Andreas
#8
In my firewall logs, I often see blocked packets going from an internal LAN device to another internal LAN device.
My questions:
1) Why does OPNsense see those packets at all? They should be switched and never reach the firewall?!
2) I added a "SRC: LAN_NET DST: LAN_NET allow any" rule, but it didn't change the logging behavior.

Within the LAN everything seems to work.
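To show what actually arrives at the firewall, I can capture LAN-to-LAN packets on the LAN interface (a sketch; the interface name igb0 and the 10.0.0.0/24 LAN net are assumptions):

# Capture LAN-to-LAN packets hitting the firewall's LAN interface;
# -e prints MAC addresses, which helps spot switch flooding or MAC table issues
tcpdump -eni igb0 src net 10.0.0.0/24 and dst net 10.0.0.0/24 and not host 10.0.0.1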

Can you give me some feedback?

Yours,

Andreas
#9
No one out there having a transparent proxy with IPv6 enabled?  :-\
#10
I'm trying to implement a transparent squid proxy with OPNsense. With IPv4 everything works, but IPv6 doesn't do anything. The settings look correct to me.

My guess: Squid itself is IPv6-capable, as the cache log shows:

cat /var/log/squid/cache.log
[...]
2019/01/22 10:00:54 kid1| Accepting NAT intercepted HTTP Socket connections at local=[::1]:3128 remote=[::] FD 14 flags=41

but IPv6 NAT redirect is not implemented in FreeBSD's pf (and therefore not in OPNsense).
I can create an IPv6 rule under "Firewall: NAT: Port Forward" but it seems to be ignored by the system. Is this correct?
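For reference, the pf redirect such a Port Forward entry would have to generate looks roughly like this (a sketch; the interface name igb0 is an assumption, and whether pf applies it to inet6 on this FreeBSD version is exactly my question):

# Redirect IPv6 HTTP traffic arriving on the LAN interface to squid on [::1]:3128
rdr pass on igb0 inet6 proto tcp from any to any port 80 -> ::1 port 3128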

Yours, Andreas
#11
THANKS! That was the problem!

Checking a LAN client on the other side of the APU board:

$ iperf -c 172.20.1.1 -p 4712 -u -t 60 -i 10 -b 1000M
------------------------------------------------------------
Client connecting to 172.20.1.1, UDP port 4712
Sending 1470 byte datagrams, IPG target: 11.22 us (kalman adjust)
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[  4] local 10.0.0.10 port 63781 connected with 172.20.1.1 port 4712
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.22 GBytes  1.05 Gbits/sec
[  4] 10.0-20.0 sec  1.22 GBytes  1.05 Gbits/sec
[  4] 20.0-30.0 sec  1.22 GBytes  1.05 Gbits/sec
[  4] 30.0-40.0 sec  1.22 GBytes  1.05 Gbits/sec
[  4] 40.0-50.0 sec  1.22 GBytes  1.05 Gbits/sec
[  4] 50.0-60.0 sec  1.22 GBytes  1.05 Gbits/sec

;D

Even with some IDS rules enabled  8)
#12
I have an APU2 board with OPNsense as well. My board only achieves about 120 MBit/s per NIC in iPerf  >:(
I posted the problem here: https://forum.opnsense.org/index.php?topic=11228.0
#13
Thanks for your hint. I already know the other performance-related threads, but none of them offered a suggestion that solved my problem.

The most confusing thing: in most of the threads I found, users complain about only hitting 500-600 MBit/s on APU boards with a BSD system. My APU board only reaches about 150 MBit/s per NIC :(
#14
Hi everyone,

I'm new to OPNsense and have been using it for about two months now. It is a very cool product and I really enjoy using it.

However, there is a problem I couldn't solve myself: I run OPNsense on an APU2c4 (Intel NICs), installed on an SD card. When I iperf to my firewall (from a MacBook Pro with Thunderbolt Ethernet), I only get about 110-120 MBit/s.
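The test itself is nothing special (a sketch; 10.0.0.1 as the firewall's LAN address is an assumption):

# On the firewall: start an iperf server
iperf -s

# On the MacBook: 30-second TCP test against the firewall's LAN address
iperf -c 10.0.0.1 -t 30 -i 5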
I already tried two things:

  • Changing "Interfaces: Settings" in several ways: when I enable hardware support, I get a slight performance improvement, but only about 5-10 MBit/s.
  • Connecting the APU board to my switch (Netgear ProSafe GS724T) with a LACP LAG (I had one interface free for it): enabling the LACP LAG does in fact double my performance, but starting from 110-120 MBit/s, that only brings me to about 250 MBit/s.

Can anyone help me?

Yours, Andreas