Messages - Neo

#1
No... I don't have a quick/easy way to do that... This is a Dell T320 with 16xSAS RAID 10, 4x1G NICs, and plenty of RAM/CPU... The OpnSense VM was given 3x vCPU and 4GB RAM and showed no indication of resource constraint on the dashboard... I have been thinking about toying with ProxMox but don't have that built yet and the hardware would be nothing compared to a T320... The host is not that loaded but it is hosting 4 VMs that are critical so I can't be doing a lot of rebooting the host... Assuming I could prove this was not an issue on a ProxMox box, where would I go from there?

I am wondering if there are others here getting full performance out of OpnSense on a Hyper-V VM who would share their configuration... I do have Broadcom NICs and am not using VMQ, but that does not seem to hold anything else back other than OpnSense... I have other Linux stuff (not BSD) running without issues (like Ubuntu, Docker with PiHole/cloudflared, Home Assistant, etc.), which is why my gut is saying BSD vs HV... but I sure could use some corroboration from elsewhere...
#2
iperf3 outputs (before and after):

***


PS C:\temp\iperf-3.1.3-win64> .\iperf3.exe -c 10.99.1.2 -p 7777 -bidir
Connecting to host 10.99.1.2, port 7777
[  4] local 192.168.1.101 port 49713 connected to 10.99.1.2 port 7777
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  84.6 MBytes   709 Mbits/sec
[  4]   1.00-2.00   sec  93.9 MBytes   786 Mbits/sec
[  4]   2.00-3.00   sec  83.8 MBytes   704 Mbits/sec
[  4]   3.00-4.00   sec  96.9 MBytes   812 Mbits/sec
[  4]   4.00-5.00   sec  87.8 MBytes   737 Mbits/sec
[  4]   5.00-6.00   sec  90.4 MBytes   758 Mbits/sec
[  4]   6.00-7.00   sec  94.1 MBytes   789 Mbits/sec
[  4]   7.00-8.00   sec  88.2 MBytes   741 Mbits/sec
[  4]   8.00-9.00   sec  97.0 MBytes   813 Mbits/sec
[  4]   9.00-10.00  sec  97.4 MBytes   818 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   914 MBytes   767 Mbits/sec                  sender
[  4]   0.00-10.00  sec   914 MBytes   767 Mbits/sec                  receiver

iperf Done.
PS C:\temp\iperf-3.1.3-win64> .\iperf3.exe -c 10.99.1.2 -p 7777 -bidir
Connecting to host 10.99.1.2, port 7777
[  4] local 10.99.1.106 port 49715 connected to 10.99.1.2 port 7777
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   351 MBytes  2.94 Gbits/sec
[  4]   1.00-2.00   sec   413 MBytes  3.46 Gbits/sec
[  4]   2.00-3.00   sec   377 MBytes  3.16 Gbits/sec
[  4]   3.00-4.00   sec   417 MBytes  3.50 Gbits/sec
[  4]   4.00-5.00   sec   412 MBytes  3.45 Gbits/sec
[  4]   5.00-6.00   sec   425 MBytes  3.56 Gbits/sec
[  4]   6.00-7.00   sec   402 MBytes  3.37 Gbits/sec
[  4]   7.00-8.00   sec   416 MBytes  3.49 Gbits/sec
[  4]   8.00-9.00   sec   430 MBytes  3.61 Gbits/sec
[  4]   9.00-10.00  sec   415 MBytes  3.48 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  3.96 GBytes  3.40 Gbits/sec                  sender
[  4]   0.00-10.00  sec  3.96 GBytes  3.40 Gbits/sec                  receiver



***

... sure feels like an OpnSense/FreeBSD interaction with Hyper-V to me... Don't know if this can be resolved or how to approach tweaking it further... I've already played with VMQ and RSC... at this point it's simply 2 endpoints on 1 or 2 virtual switches with or without OpnSense in between, and not much traffic on the target server hosting the iperf3 target either...

Are there settings I should look at tweaking within OpnSense and/or BSD at this point? I'm getting a bit beyond my comfort zone there... Are there any guides anywhere on optimal configuration of OpnSense in Hyper-V 2019? I need to know if there is a solution I can chase or if I'm running into a limitation of this combination that will just have me chasing my tail... It's likely not going to work for me unless I can get this figured out...
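
In case it's useful for comparing notes, these are the host-side knobs I've been poking at from an elevated PowerShell prompt (the switch/VM names below are placeholders for my actual ones, so treat this as a sketch rather than gospel):

***

# Check whether software RSC is enabled on the vSwitches (Server 2019 and later)
Get-VMSwitch | Select-Object Name, *Rsc*

# Disable software RSC on the external vSwitch the OpnSense vNICs hang off of
Set-VMSwitch -Name "ExternalSwitch" -EnableSoftwareRsc $false

# Turn off VMQ for just the OpnSense VM's adapters (weight 0 = disabled)
Set-VMNetworkAdapter -VMName "OpnSense" -VmqWeight 0

***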
#3
To narrow this down a bit further I built a plain OpnSense with just LAN and WAN on a fresh VM, with WAN connected to my main production vSwitch and LAN connected to an isolated private vSwitch, and turned on an iperf3 server on a Windows 2019 VM... I tested the speed from both an Ubuntu (22.04) VM and a Win 10 VM to the Win 2019 server (all connected to the same vSwitch so nothing actually touching an external switch in the real world) and then moved both to the isolated vSwitch on the LAN side of the OpnSense and tested again... Results were similar in both cases... I only get about 30% of the expected data transfer when routing through the OpnSense vs. having no OpnSense in between...
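
For anyone wanting to reproduce the test rig, it was put together roughly like this (the VM/switch names are just examples, and the iperf3 server on the 2019 VM was simply .\iperf3.exe -s -p 7777):

***

# Create the isolated private vSwitch for the LAN side of the test OpnSense
New-VMSwitch -Name "IsolatedLAN" -SwitchType Private

# Give the test OpnSense a second vNIC on the isolated switch (WAN stays on the production vSwitch)
Add-VMNetworkAdapter -VMName "OpnSense-Test" -SwitchName "IsolatedLAN"

# Move the client VMs onto the isolated switch for the "through OpnSense" runs
Connect-VMNetworkAdapter -VMName "Win10-Test" -Name "Network Adapter" -SwitchName "IsolatedLAN"
Connect-VMNetworkAdapter -VMName "Ubuntu-Test" -Name "Network Adapter" -SwitchName "IsolatedLAN"

***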



#4
Thank you for your response.

I did see that in the original thread from the 22.1 RC and I did try that... it did not seem to make a difference... However, all interfaces are attached to the same vSwitch in my case, which is an "external" switch connected to a 3 NIC team on the host (which plugs into a LAG on the physical switch). The vNIC for the OpnSense LAN interface is native (vLAN 0) but all other interfaces on the OpnSense VM are on tagged vLANs (set at the VM level)... not sure if that makes a difference or not...

I can try disabling RSC again and see if that makes a difference... unfortunately I only have the 3 physical NICs to work with and 3 critical VMs (Server 2019) already using them, so room for experimentation is somewhat limited...
#5
Apologies if I've missed an existing solution somewhere... I did do searches on this and found a thread back around the 22.1 RC timeframe that 'might' be related but did not seem to offer a conclusive remediation and might not be the same issue I'm experiencing...

Background: I have been running OpnSense as a VPN ("only") gateway for the past couple years on a single NIC Intel NUC so everything other than the interface assigned to LAN is a tagged vLAN... It has multiple WAN links (1gb, 300mb, LTE) and multiple VPN links (one for each WAN) all handled by an L2 vLAN switch... The performance has been excellent with full gigabit throughput from a physical PC on the LAN to internet hosts with consistent speed test results on 21.1->23.1

I am now building out a Hyper-V VM with a slightly different configuration (1 WAN link, 2 VPN tunnels, LAN + several additional vLANs that will be firewalled and have limited or no access between them)... The WAN and LAN are on separate virtual NICs defined on the VM at the HV level and I have a Win 10 VM with a single vNIC on the LAN side on the same vSwitch as well as a physical PC on the LAN side to test from...

Upload speeds are fine but download speed is about 25-30% of what I would expect...

Details:
* Win 10 VM is on the same virtual "10 gb" switch as all the OpnSense vNICs/vLANs
* vSwitch tied to a physical 3 NIC "team" (LAG) between the host server and L2 vLAN switch in the server rack
* Rack switch has 1gb uplink to main L2 vLAN Switch near ISP router
* ISP router has 1gb connection to main L2 vLAN Switch
* both physical switches have plenty of backplane bandwidth and are not handling excessive traffic

Thoughts:
* no bottleneck on 10gb vSwitch
* no bottleneck on 3gb LAG
* data transfer between PC on main switch and other (Win Server 2019) VMs on the same vSwitch/Physical switch are fast
* some "potential" limitations of 1gb fiber link between switches but should not limit downlaod to 25-30% of normal

My gut says there is something about OpnSense or FreeBSD that isn't working well with my Hyper-V setup, as I've done many other things with this host and set of switches (even using multiple vLANs and other virtual router configs) -- I have not done a lot of deep granular tweaking of Hyper-V network settings other than turning off VMQ on the physical NICs (they are Broadcom and turning that off has long been recommended on these NICs) and I'm not very familiar with low level settings on OpnSense or the underlying networking of HardenedBSD...
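
(For what it's worth, the VMQ change was just the standard per-adapter disable from an elevated PowerShell prompt; the NIC name below is a placeholder for each physical team member:)

***

# Confirm the current VMQ state on the physical NICs
Get-NetAdapterVmq

# Disable VMQ on each Broadcom NIC in the team (repeat for each member)
Disable-NetAdapterVmq -Name "NIC1"

***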

Hoping someone else has already experienced this and has a fix for me or that this does in fact relate to whatever changed (and caused issues) in 22.1 RC and there is a remedy via tweaks on OpnSense or HV (or both)...

Please advise!
#6
Background: I've set up OpnSense with multiple WAN gateways (dual internet + LTE fail-over) with a VPN tunnel (via a public VPN provider) on each WAN link... I have a WAN_Gateway group and a VPN_Gateway group set up with the appropriate Tier1/Tier2 gateways and policy-based routing via Firewall rules on LAN... all of that is working fine...

I am working on moving away from PiHole on a separate device to AdGuard Home on the OpnSense... I have everything working EXCEPT I cannot figure out how to route the DNS queries from AdGuard to public DNS via the VPN_Gateway group (or even via a specific VPN gateway)... For PiHole (a separate device on LAN) I just needed a rule with the source being the PiHole IP... But, for AdGuard (on the firewall itself), I can't get a rule to work (LAN or even floating)...

I can see queries going out in the live view of Firewall logs (via the "let out anything from firewall host itself" rule) and it shows ">WAN {LAN interface IP} {upstream DNS IP}" ...

I've tried rules on LAN, WAN, and floating... I fear I'm missing something silly... Hopefully this is in fact something simple... I don't fully understand the relationship between AdGuard and OpnSense with AGH running on the device itself... but it does everything I want, the way I want, except for routing the upstream queries over the VPN (preferably using a VPN Gateway group that load balances those tunnels)...

FYI: I am using a DNS-over-TLS connection to the upstream DNS servers... but I want to obfuscate both ends... DoT ensures the payload of the query/answer is not intercepted by the ISP or snoopers in the routed path... VPN ensures the upstream DNS is not aware of the true origin of the query, and load balancing across 2 VPN tunnels creates further obfuscation as well as redundancy (fault tolerance)... Again, I had this all working with PiHole (which sent queries through the firewall via a DoH proxy on the PiHole using Docker with PiHole & cloudflared), so the only real difficulty is that the LAN rule I was using for that does not seem to work with AdGuard running on the firewall itself...



#7
I have multiple VPN clients configured to connect to separate servers in different areas (US-Atlanta, CA-Vancouver, UK-London, etc.) and want to route traffic via these specific tunnels (exit points) based on source network or, in some cases, service/ports...

All these clients are using the same VPN provider and while each connection gets assigned a different virtual IP (10.x.x.y) all connections appear to be assigned the same gateway IP (10.x.x.1) no matter which server is being connected to...

The gateway created for the 1st VPN interface will display the IP for the virtual network (10.x.x.1) but the 2nd connection will display no IP for the VPN gateway even though the connection is "up" and all other aspects appear normal and functional...

I'm assuming this behavior may be related to multiple tunnels being on the same "virtual subnet" and all having the same IP for upstream gateway (i.e. VPN1 = 10.1.2.101, VPN2 = 10.1.2.102, Gateway for both = 10.1.2.1)...

I'm not sure if this behavior is normal/expected, if I've found a bug or limitation, or if this setup is just not viable on OpnSense...

Has anyone set something like this up using a public VPN provider?

All connections are OpenVPN using UDP and each connection "works" as long as I only try to use one at a time... Is there any work-around for this scenario? Is there, for example, a way to route via the assigned VPN interface instead of by gateway?

#8
@MKS - sounds like a good approach. thanks for the tips!
#9
So what is the general practice for 'root'? Since the ID cannot be changed (as far as I can tell) and I'd tend to 'assume' you want at least one account that is not tied to TOTP just in case that fails... Do you just set a really complex random password? I've already done that and set up a separate admin login for myself (have not fully tested with TOTP, yet)... On other systems I've generally changed 'root' (or the factory default admin account) to some other ID whenever possible...
#10
Couple of things here from setting up my own multi-wan configuration recently (and others with more OpnSense experience, please correct me if I've got something wrong)...

First, if your goal, aside from routing SMTP for Postfix, is to have all traffic use WAN2 and only use WAN1 if WAN2 is down, then I think you may need to reverse your gateway setup:

WAN2 would have the higher weight of 5 and be Tier1
WAN1 would have the lower weight of 1 and be Tier5 (or even Tier2)

If your goal is to load balance such that the faster and slower links are aggregated, with the fast link being used much more often than the slower link, then you'd want both the WAN1 and WAN2 gateways on the same Tier but with the greater weight given to the faster link... I would personally tend to recommend against that for a couple of reasons: (A) the ratio of 10 megabit to 150 megabit is just too wide a gap to effectively load balance, and (B) the "load balancing" in OpnSense is somewhat limited since, as I think I understand it, it is basically just doing a weighted round-robin on new connections. Thus my opinion, for whatever that is worth, is that you'd be best off setting your slower connection up to be fail-over only (Tier5), having your main firewall rule push traffic through your routing group, and then writing rules that override that for traffic you want to use the slower link for on purpose...

Which brings me to your original question...

I'm not familiar with Postfix specifically or with setting it up to run on the OpnSense itself. However, if you can configure it as bound to the LAN, and assuming that configuration would cause OpnSense to treat it as traffic originating on the LAN (as opposed to seeing it as some alternate form of "internal" traffic), then you should be able to create a single rule in Firewall / Rules / LAN (placed ABOVE the general rule routing traffic via the gateway group) that routes SMTP over WAN1...

You can see in the attached snapshot I've done something similar to cause all traffic headed to DYNU to utilize WAN2 (that, in my case, is the link that has the dynamic IP I'm trying to update)... Note that this rule must be placed ABOVE the rule with the gateway group so it acts as an "override"...

Hope that helps.

#11
I run a multi-wan setup with a 300 megabit and a 1 gigabit link. I have 2 separate VPN providers with the client configuration bound to their respective WAN interfaces. I use a Gateway Group and LAN firewall rule to direct traffic down the tunnels on each WAN link in a load balanced configuration with the respective gateways configured to use "weight" to balance more load toward the gigabit link, etc. I have not turned on Sticky Connections in Firewall/Settings/Advanced...

This setup has worked well through testing and produced good numbers via speedtest.net and, until now, I've not had problems with websites other than those that actively try to detect and/or block use of VPN providers (or block due to incorrect GeoIP data). In other words, the load balancing and potentially changing IPs under the covers have not presented a problem, in general.

However, I've now run into a single website (retirement fund custodian) that uses OKTA for MFA and frequently either fails the login process or kicks me off the session... After further research I believe this is because the mechanism they have set up is not tolerant of source IP changes during the session, etc.

Using Sticky Connections should resolve the issue for the site in question but it will also prevent bandwidth aggregation and decrease the benefit of load balancing for all other sites since it is a global (all or nothing) setting... So I'm now trying to brainstorm a solution that would allow me to resolve the issue for the site in question without losing the benefits for all the other sites...

I know I can configure firewall rules such that a particular source IP bypasses the load balancing and is sent down only one specific tunnel... In theory, this could be done for a destination IP as well but the trouble is websites like this have multiple IPs and tend to reference other sites with multiple IPs. I know I can create an alias that is populated via DNS lookup as well but at the very least I'd have to determine all FQDNs used by both this site and OKTA to resolve the issue that way...

So can anyone else think of a creative solution where I don't have to enable sticky connection for all traffic but can force traffic using this site to be sticky (or to only traverse one of the two WAN/VPN routes)?

Looking for something as close to "best of both worlds" as possible here...

Thanks!
#12
Hello,

I'm wondering if there is a guide or checklist (or set of bullet-point recommendations) around securing OpnSense for full production deployment. There are obvious things like not turning on SSH (or making sure it is set up to be super secure and only accessible via the LAN), setting a strong password on root, possibly setting up MFA, etc., but I'm wondering if there are more considerations and/or if anyone has or knows of content that addresses this question specifically.

Also curious if most of you access the OpnSense using 'root' or if you tend to lock that down and access the firewall using a separate admin login once in production?
#13
I think I have a configuration that is 'functional' for my purpose at this point (see below). However, I'd like to have control over the Reporting section (specifically to be able to remove access to the Reporting:Settings page) and I'd like to be able to allow viewing of Services without having to allow Start/Stop/Reset...

Also, after more research it appears the "User System: Deny config write" privilege may be deprecated, as a warning is presented when selecting it that it may be removed in a future release...

This thread discusses the topic from 2 years ago, before the release of 19.7, and indicates that, from a development perspective, "The 'privilege' to take away privilege is deeply flawed from the get go..."
https://forum.opnsense.org/index.php?topic=12039.0

This makes me wonder what the future is for creating/maintaining this type of access... While it might not be something used often in home lab environments, I would think there is a fair amount of merit for its use in various company / production environments. So, I'd be pleased to know what the developers think about this and what direction they see taking on it in the future.

Below is what I've come up with thus far...

Type   Name
GUI   Lobby: Login / Logout / Dashboard
GUI   Dashboard (widgets only)
GUI   Diagnostics: ARP Table
GUI   Diagnostics: Configuration History
GUI   Diagnostics: Logs: Firewall: Live View
GUI   Diagnostics: Logs: Firewall: Plain View
GUI   Diagnostics: Logs: Firewall: Summary View
GUI   Diagnostics: Logs: Gateways
GUI   Diagnostics: Logs: System
GUI   Diagnostics: Netstat
GUI   Diagnostics: Network Insight
GUI   Diagnostics: Packet Capture
GUI   Diagnostics: PF Table IP addresses
GUI   Diagnostics: pfInfo
GUI   Diagnostics: pfTop
GUI   Diagnostics: Ping
GUI   Diagnostics: Routing tables
GUI   Diagnostics: Show States
GUI   Diagnostics: States Summary
GUI   Diagnostics: System Activity
GUI   Diagnostics: System Health
GUI   Diagnostics: Test Port
GUI   Diagnostics: Traceroute
GUI   Firewall: NAT: Outbound
GUI   Firewall: Rules
GUI   Status: DHCP leases
GUI   Status: Interfaces
GUI   Status: IPsec
GUI   Status: NTP
GUI   Status: OpenVPN
GUI   Status: Services
GUI   Status: System logs: IPsec VPN
GUI   Status: System logs: NTP
GUI   Status: System logs: OpenVPN
GUI   Status: System logs: Routing
GUI   Status: Traffic Graph
User   System: Deny config write
GUI   System: Gateway Groups
GUI   System: Gateways

#14
Hey everyone.

I'm new to the forum and new to OpnSense (but not new to firewalls, networking, etc.) and this is my first post here. I have done some searching both via google and on the forums here and was surprised not to find much on this topic. I hope this is the correct place to ask this question and that I have not missed something obvious either in the configuration or in my searches...

I've been working with my nephew to deploy OpnSense for a "home lab" scenario. This started as an exploration of open source firewall alternatives and now I'm ready to put something into "production" with real traffic and devices behind it and, as such, I'm starting to lock things down and harden the configuration...

As part of this I wanted to configure an admin user (for my nephew) with read only privileges that can view all of the pages and logs and such but cannot make changes to the configuration. I did this for him on my SonicWALL originally so he could learn stuff and even help me troubleshoot (but without me having to worry about him making unauthorized changes or playing whack-a-mole trying to solve a problem)...

So far, I've created a group called "view" and started setting up GUI privileges, but it seems like certain pages allow editing or are all-or-nothing (see and edit or don't see at all)... Perhaps I don't understand exactly how this works or what the limitations are, but before I spend too much more time on it I thought I'd ask if there is a guide or set of recommendations for creating a read-only "admin" that can see everything but not change it.

Perhaps there is a simple way of doing that I'm just missing?

Also, on a somewhat related note, I'm looking for a guide on how best to harden the OpnSense configuration. I've set a strong password on root, created a separate user with full admin privileges for myself, made sure SSH is not enabled, etc., but I'm still fairly new to OpnSense and not feeling 100% confident I have not missed something. I've also not attempted to set up MFA at this point (yet).

Thanks.