Show Posts


Messages - J. Lambrecht

1
22.1 Production Series / Re: request for help with: single public IP, a bridge, two opnsense-fw VM > VMs
« on: February 12, 2022, 11:28:11 am »
Thanks for sharing. That's roughly how I'm going about it.

By now I've found that renting an extra public IP is affordable, and I've assigned this extra public IP to a bridge interface, which is now exposed to the VM as a routed network interface (QEMU/KVM).

The OPNsense VMs appear to be running as expected in HA mode using CARP. Now I want to add the IP assigned to the bridge interface as an HA IP to which I can bind various services.

So the set-up is now: [ public IP #1 ]-[ eth0 ] -> [ bridge ]-[ public IP #2 ] -> [ OPNsense VM ]

Public IP #2 is reachable from the internet, but the traffic does not show up in the OPNsense VM.

I understand this is because public IP #2 responds to the external traffic arriving over public IP #1, but I do not understand in what set-up public IP #2 is 'owned' by the OPNsense VM cluster.
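To make that 'ownership' question concrete, here is a minimal sketch, assuming a Linux/KVM host, a bridge named br0 and a placeholder address standing in for public IP #2 (none of these names come from the post): it checks whether the host itself still holds the address, and which MAC answers for it on the segment. With CARP working, the answering MAC should be the cluster's virtual 00:00:5e:00:01:xx address, not the host's.

Code:
#!/usr/bin/env python3
"""Sketch: which interface/MAC currently answers for a public IP?
Bridge name and address below are placeholders, not taken from the post."""
import subprocess

PUBLIC_IP_2 = "203.0.113.10"   # placeholder for the second public IP
BRIDGE = "br0"                 # placeholder for the KVM host bridge

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=False).stdout

# If the host has the address configured on the bridge, the host itself
# answers for it and the packets never reach the OPNsense VMs.
addrs = run(["ip", "-br", "addr", "show", "dev", BRIDGE])
print("host holds the IP" if PUBLIC_IP_2 in addrs else "host does not hold the IP")

# Which MAC answers for the address on this segment? With CARP the expected
# answer is the virtual MAC 00:00:5e:00:01:<VHID>, owned by the VM cluster.
print(run(["ip", "neigh", "show", "to", PUBLIC_IP_2]))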

2
22.1 Production Series / request for help with: single public IP, a bridge, two opnsense-fw VM > VMs
« on: February 11, 2022, 12:56:22 am »
Hey,

Thanks for taking a little bit of time to share your thoughts.


I have this server at my disposal, but just one public IP.

The server is a dual-CPU 8c/16t machine with plenty of RAM and disk.

The set-up I have in mind is: [ public IP ] > [ virbr0, virbr1, virbr2 ] > ( opnsense-fw-1, opnsense-fw-2 ) > virtual LAN > VM1...N
On VM1..N there will be just a few VMs running services.

So now I have the public IP, to which I configure DNS to resolve, and I want to have this traffic arrive at each of VM1..N on different ports.

To this end I expected to use the public IP as a WAN VIP, but now I'm not certain whether the SSH service running on the VM host will still be reachable if I do so.

Or, for that matter, whether I could have the OPNsense HA cluster correctly resolve the DNS and match it with the hosts behind the NAT.
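Since the question is essentially "one public IP, different ports to different VMs, without losing SSH to the host", here is a small sketch of that port plan. The VM names and port numbers are made-up placeholders, not from the post; the actual forwarding would be done with OPNsense port-forward (NAT) rules, this only illustrates the mapping and the host-SSH clash to avoid.

Code:
#!/usr/bin/env python3
"""Sketch of the intended NAT plan: one public IP, external ports mapped to
different backend VMs, port 22 kept free for SSH to the hypervisor itself.
All names and port numbers are illustrative placeholders."""

HOST_RESERVED = {22}  # SSH to the VM host must stay un-forwarded

# external port on the public IP -> (internal VM, internal port)
PORT_FORWARDS = {
    443:  ("vm1.lan", 443),   # placeholder web service on VM1
    8443: ("vm2.lan", 443),   # placeholder web service on VM2
    2222: ("vm1.lan", 22),    # SSH to VM1 via an alternate external port
}

clashes = HOST_RESERVED & PORT_FORWARDS.keys()
if clashes:
    raise SystemExit(f"forwarding port(s) {sorted(clashes)} would hide the host's own SSH")

for ext, (vm, port) in sorted(PORT_FORWARDS.items()):
    print(f"public-ip:{ext} -> {vm}:{port}")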

3
20.7 Legacy Series / Re: repeat crashing
« on: November 09, 2020, 11:42:04 am »
Quote from: Gauss23 on October 31, 2020, 07:16:18 am
Sensei or Suricata enabled?

Hey, why did you ask?

4
20.7 Legacy Series / Re: repeat crashing
« on: October 31, 2020, 10:07:04 pm »
Quote from: Gauss23 on October 31, 2020, 07:16:18 am
Sensei or Suricata enabled?

Suricata only and detection only

5
20.7 Legacy Series / repeat crashing
« on: October 31, 2020, 06:56:12 am »
Dear all, it is with a sense of dread that I write this post, as it concerns OPNsense going haywire repeatedly.


The logs reviewed thus far do not contain a clear indicator of what happened, but it never happens just once; this is the third time in just two weeks. In all previous cases the scheduled Suricata rule updates appear to correlate in time, except today, and this time it is even worse than just Unbound and Suricata dying.


I've now initiated the update to 20.7.4, which I had not done before since it only presented a pkg update.


My question is whether others experience such crashes as well; my concern is that it may not be just instability of OPNsense but an external factor. If so, there are no indicators left in the logs.

6
20.7 Legacy Series / Re: unstable on proxmox ? [ partially SOLVED ]
« on: October 07, 2020, 06:00:51 pm »
Quote from: J. Lambrecht on October 02, 2020, 10:12:53 pm
Think I cracked the problem.

core issue


1) the DHCP scope did have a gateway set but not a router
2) manually setting DHCP option 3 to type IP with the IP address of the LAN interface appears to work

depending issues

1) IDS crash on rule-update failure = to all appearances, fixed now (the crash was caused by a DNS failure!)
2) Unbound flapping = improved, not fixed

This approach did indeed remediate all issues mentioned.

Unbound remains flaky due to some configuration glitch between Proxmox and Unbound with regard to route preferences.
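For readers wondering what "DHCP option 3" is: per RFC 2132 it is the Router option, i.e. the default gateway handed to clients. A tiny sketch of how that option looks on the wire (the 192.168.1.1 LAN address is just a placeholder):

Code:
#!/usr/bin/env python3
"""Sketch: encode DHCP option 3 (Router, RFC 2132) - tag, length, then
4 bytes per gateway address. The address is a placeholder LAN gateway."""
import socket

def dhcp_router_option(*routers: str) -> bytes:
    payload = b"".join(socket.inet_aton(ip) for ip in routers)
    return bytes([3, len(payload)]) + payload

# e.g. the OPNsense LAN interface address handed out as the default gateway
print(dhcp_router_option("192.168.1.1").hex())   # -> 0304c0a80101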


7
20.7 Legacy Series / Re: Backup - restore 20.1.9 in 20.7.3/4?
« on: October 02, 2020, 10:28:38 pm »
Quote from: GreenMatter on September 29, 2020, 10:20:26 pm
Yes, I could have deployed a test instance of OPNsense. But before doing so, I would like to know what to expect and which way is the better one. 8)


Well, deploying a throwaway VM with 20.7 is the preferable way if you are curious.



8
20.7 Legacy Series / Re: unstable on proxmox ? [ partially SOLVED ]
« on: October 02, 2020, 10:12:53 pm »
Think I cracked the problem.

core issue


1) the DHCP scope did have a gateway set but not a router
2) manually setting DHCP option 3 to type IP with the IP address of the LAN interface appears to work

depending issues

1) IDS crash on rule-update failure = to all appearances, fixed now
2) Unbound flapping = improved, not fixed



9
20.7 Legacy Series / Re: Backup - restore 20.1.9 in 20.7.3/4?
« on: September 29, 2020, 10:10:15 pm »
If you have the ability to deploy a test machine with 20.7, I'd go ahead with that; it appears not to have been fully stabilized yet in some ways.

10
20.7 Legacy Series / Re: Call for testing: official netmap kernel
« on: September 29, 2020, 03:13:28 pm »
Quote from: gauthig on September 29, 2020, 03:07:42 am
By slow I mean a drop from 1.95 Gbps to 0.915 Gbps, roughly a 50% reduction.
In 20.1.x I was seeing about 1.7 Gbps, so much less of a drop with netmap enabled.

I only showed Suricata on LAN. I'll re-run with Sensei in normal and bypass modes and send the results. By the way, my ELK stack is on another ESXi host with a 10 Gbps link, so the ELK CPU/memory load will not impact OPNsense/Sensei.

This is normal for an IDS; it inspects every packet, and if you enable all rules this is even optimistic. Disabling some rules may show a noticeable performance increase with the IDS enabled.
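For reference, a quick check of the figures quoted above (nothing assumed beyond the numbers in the quote):

Code:
# throughput figures from the quoted post
before, after = 1.95, 0.915            # Gbps without / with netmap + IDS
drop = (before - after) / before
print(f"drop: {drop:.0%}")             # ~53%, i.e. roughly the quoted '50% reduction'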

11
20.7 Legacy Series / DNS queries return 0.0.0.0 as address, no blacklist enabled
« on: September 29, 2020, 03:09:05 pm »
  • with the Unbound service there are recurrent issues where the service simply stops responding
  • DNS lookups from the OPNsense web UI from any interface work as normal
  • this makes me think the problem does originate within Unbound
Validating the Unbound configuration I could not find any blacklist enabled. After rebooting OPNsense I found that the domains which return 0.0.0.0 as their address briefly do resolve correctly. The IDS was not enabled at the time, since it had crashed once more; also, when disabling the IDS there was no change observed in the erroneous DNS query results.
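A minimal way to reproduce the symptom from a client, assuming only that the client uses the affected Unbound instance as its resolver (the domain names below are placeholders): look up a handful of names and flag any answer of 0.0.0.0, which points at a blocklist or a broken upstream rather than a real record.

Code:
#!/usr/bin/env python3
"""Sketch: flag DNS answers of 0.0.0.0 via the system resolver.
Domain names are placeholders, not taken from the post."""
import socket

for name in ("example.com", "opnsense.org"):
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(name, None, socket.AF_INET)}
    except socket.gaierror as err:
        print(f"{name}: lookup failed ({err})")
        continue
    flag = "  <-- suspicious" if "0.0.0.0" in addrs else ""
    print(f"{name}: {', '.join(sorted(addrs))}{flag}")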

12
20.7 Legacy Series / Re: unstable on proxmox
« on: September 28, 2020, 11:46:42 pm »
Hey Mark,

This time I got lucky, so to speak. The OPNsense VM went completely haywire again.

The IDS service crashed, and rebooting showed a massive amount of errors and flaws. The firewall had been running peachy for hours, up to the mistake of assigning an invalid IP as DNS server in a DHCP scope.

It is the only change I can think of that happened at the time. The console was again filled with swap-fail messages. What happened hours before is that I had:


1) enabled the 2 GB swap space flag to make sure I would not have any memory issues. The VM has 2.5 GB of RAM to run dhcpd, Suricata, ntpd and Unbound, which I think should be adequate. Since the services only appear to crash on memory depletion, enabling swap seemed like a good idea.

2) set the VM to run with SeaBIOS and i440fx (I just noticed it had QXL set as display, which I don't think is sensible). I have now powered off the OPNsense VM, assigned VirtIO SCSI single, and set the display to standard VGA.

If anything goes wrong again it will take more hours for this to happen. What I do notice is that during this time the memory consumption soars from around 800 MB to 2.1 GB and more.



13
20.7 Legacy Series / unstable on proxmox ?
« on: September 27, 2020, 07:10:21 pm »
Dear,

Having used OPNsense since release 17 or so, I find it unstable to work with on Proxmox VE 6.2.

The disk I/O is troublesome, to the point that only selecting IDE with SSD emulation appears to work well (for speed); choosing a different kind of controller results in a lot of swap-fail notifications.

On shutdown a plethora of errors are thrown which appear low-level, regardless of the controller chosen.

All in all, I don't feel like 20.7 is as production-ready as one typically assumes.

Memory consumption appears quite high out of the box; the VM has 2.5 GB of RAM and frequently starts complaining it is out of swap space, shutting down multiple services without warning.

14
Intrusion Detection and Prevention / Flowbit rules and no alert
« on: November 09, 2018, 05:53:45 pm »
Dear,

Confronted with Zberp being reported as originating from my smart TV in relation to Netflix traffic (yes, port 80), I came to look at Suricata SID 2021831, which is a flowbits:noalert rule.

It took me a while and I had to ask, but someone pointed out that this rule is not supposed to trigger an alert, since it is a flowbits rule for which no alert is configured. Hence I wondered whether this (most likely) is my mistake of enabling such a rule, or whether this is a known error in the Suricata configuration shipped with OPNsense.
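For anyone else puzzled by the same thing, here is a rough conceptual sketch, in plain Python rather than Suricata code and with made-up flowbit and flow names, of why a flowbits:noalert rule never shows up in the alert log by itself: it only marks the flow, and a separate rule alerts later if the mark is set.

Code:
#!/usr/bin/env python3
"""Conceptual sketch of flowbits: a noalert rule sets a flag on the flow,
a follow-up rule alerts only if the flag was set. All names are made up."""

flow_state = {}   # flow id -> set of flowbit names

def rule_set_noalert(flow_id):
    """Like a flowbits:set + flowbits:noalert rule: mark the flow, report nothing."""
    flow_state.setdefault(flow_id, set()).add("example.stage1")
    return None

def rule_isset_alert(flow_id):
    """A follow-up rule: alert only when the flowbit is already set."""
    if "example.stage1" in flow_state.get(flow_id, set()):
        return f"ALERT on flow {flow_id}"
    return None

print(rule_set_noalert("tv->netflix:80"))   # None: the rule matched but stays silent
print(rule_isset_alert("tv->netflix:80"))   # alerts only because the bit was set first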

Thank you

15
Dutch - Nederlands / Re: Ping to the WAN interface does not work.
« on: November 02, 2018, 10:44:37 pm »
Isn't reply-to on the WAN simply turned off?
