Messages - klamath

#46
20.7 Legacy Series / Re: IDS + Haproxy + SSL decrypt
January 26, 2021, 08:40:13 PM
This is disappointing, though I understand the issues around SSL inspection and decrypting the traffic.  Are there any plans to put a system in place that makes SSL inspection work on OPNsense in the future?  The more I dig into it, the more IDS/IPS looks like a non-starter on OPNsense in its current state without fronting a CA cert or using unencrypted traffic on the backend.

#47
20.7 Legacy Series / IDS + Haproxy + SSL decrypt
January 25, 2021, 04:49:22 PM
Howdy,

I just finished converting the majority of my port forwards to HAProxy-terminated endpoints.  The SSL termination + re-encryption is taking place on my OPNsense firewall.  I have IDS monitoring my external WAN connections, and I was wondering if there is anything else I need to set up to have IDS inspect the "in the clear" data while it is traversing the firewall?
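
For context, the HAProxy setup is roughly along these lines (the names, certificate path, and backend address below are placeholders, not the real config):

frontend https-in
    bind :443 ssl crt /usr/local/etc/haproxy/site.pem   # TLS terminated on the firewall (placeholder cert path)
    mode http
    default_backend web_servers

backend web_servers
    mode http
    # re-encrypt toward the internal host; verification relaxed here only for illustration
    server web1 192.0.2.10:443 ssl verify none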

Thanks
#48
Howdy,

I have migrated away from my ASA to a new Supermicro E300-9D-8CN8TP running OPNsense.  I have been loving the product so far; however, I have been chasing performance issues around single-stream connections and IDS.

Layout:
Supermicro E300-9D-8CN8TP with 32 GB of RAM
Two ISP connections (1 Gbps/50 Mbps and 200/10 Mbps) set up Active/Active (terminating on ixl0-1)
One access port for INSIDE (ixl2)
One Trunk port for DMZ and Openstack VLANs (ixl3)
OPNsense 20.7.5 (running the 20.7.4-next kernel)

When I enable IDS/IPS my single-stream performance drops to 300 Mbps.  I have IDS/IPS enabled on both WAN circuits and not on any inside or trunked port.  Promiscuous mode is disabled.  I have tweaked the amount of RAM IDS/IPS can consume for both stream/defrag and host (see the sketch after the affinity config below), and I can manage to get around 500 Mbps now, but I'm still nowhere near the 800 Mbps I can pull from speedtest.net with multiple connections enabled.  I have done some CPU pinning on Suricata as outlined here:

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 2-3 ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ 4-5 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ 6-15 ]
        mode: "exclusive"

The CPU pinning has helped move some processes away from CPU0, but with a single-stream TCP session I'm still limited to under 600 Mbps; Suricata seems to be in see-saw mode, as the connection flutters between 300-500 Mbps on long-running streams [1].  I don't think I am CPU bound: while running a long TCP session I monitor host performance with top -P and I don't see any core hitting 100% utilization.

Any help would be appreciated, as this is the last issue I need to finish up to call this migration complete.

Tim

[1]
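The output below is from a 30-second reverse-mode iperf3 run against a remote server, something along the lines of:

iperf3 -c 144.202.48.166 -R -t 30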
Reverse mode, remote host mx2.eth0.com is sending
[  5] local 192.168.99.5 port 33098 connected to 144.202.48.166 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  22.7 MBytes   190 Mbits/sec
[  5]   1.00-2.00   sec  53.1 MBytes   446 Mbits/sec
[  5]   2.00-3.00   sec  54.8 MBytes   460 Mbits/sec
[  5]   3.00-4.00   sec  54.4 MBytes   457 Mbits/sec
[  5]   4.00-5.00   sec  41.1 MBytes   345 Mbits/sec
[  5]   5.00-6.00   sec  43.8 MBytes   368 Mbits/sec
[  5]   6.00-7.00   sec  47.2 MBytes   396 Mbits/sec
[  5]   7.00-8.00   sec  49.6 MBytes   416 Mbits/sec
[  5]   8.00-9.00   sec  52.6 MBytes   441 Mbits/sec
[  5]   9.00-10.00  sec  56.3 MBytes   473 Mbits/sec
[  5]  10.00-11.00  sec  52.5 MBytes   441 Mbits/sec
[  5]  11.00-12.00  sec  54.1 MBytes   454 Mbits/sec
[  5]  12.00-13.00  sec  53.5 MBytes   449 Mbits/sec
[  5]  13.00-14.00  sec  55.1 MBytes   462 Mbits/sec
[  5]  14.00-15.00  sec  50.2 MBytes   421 Mbits/sec
[  5]  15.00-16.00  sec  40.4 MBytes   339 Mbits/sec
[  5]  16.00-17.00  sec  43.9 MBytes   368 Mbits/sec
[  5]  17.00-18.00  sec  35.7 MBytes   300 Mbits/sec
[  5]  18.00-19.00  sec  32.6 MBytes   274 Mbits/sec
[  5]  19.00-20.00  sec  19.4 MBytes   162 Mbits/sec
[  5]  20.00-21.00  sec  25.6 MBytes   214 Mbits/sec
[  5]  21.00-22.00  sec  28.4 MBytes   238 Mbits/sec
[  5]  22.00-23.00  sec  29.0 MBytes   243 Mbits/sec
[  5]  23.00-24.00  sec  29.7 MBytes   249 Mbits/sec
[  5]  24.00-25.00  sec  29.8 MBytes   250 Mbits/sec
[  5]  25.00-26.00  sec  30.5 MBytes   256 Mbits/sec
[  5]  26.00-27.00  sec  30.1 MBytes   252 Mbits/sec
[  5]  27.00-28.00  sec  30.0 MBytes   251 Mbits/sec
[  5]  28.00-29.00  sec  30.3 MBytes   254 Mbits/sec
[  5]  29.00-30.00  sec  33.6 MBytes   282 Mbits/sec
#49
I am using the ixl drivers with IDS enabled; if I disable promiscuous mode in IDS I can get full speed again.
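
For anyone checking the same thing: on FreeBSD the PROMISC flag shows up in the interface flags line, so a quick look at the NIC state is just:

ifconfig ixl0    # look for PROMISC in the flags=<...> line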