Messages - deajan

#1
Honestly, I've fiddled around with just about every possible solution, tried QEMU v8 and v9 with various CPU models.
Finally, I came up with this set of tunables to add to `/boot/loader.conf` to get good performance.

```
# Global variants apply to all vtnet interfaces; replace X with a unit
# number (e.g. hw.vtnet.0.tso_disable) to target a single interface.
hw.vtnet.tso_disable="1"
hw.vtnet.X.tso_disable="1"
hw.vtnet.lro_disable="1"
hw.vtnet.X.lro_disable="1"
hw.vtnet.csum_disable="1"
hw.vtnet.X.csum_disable="1"
```

I took the solution from https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=165059
Even with those settings, speed is good but still far from what it should be (other people have seen the same: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=165059#c44).

Since I hadn't checked that bug report for a while, I only just noticed that vtnet offload improvements landed in FreeBSD a couple of days ago.
Perhaps things will be better now; I'll definitely need to do some testing.
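
For that testing, a quick way to confirm the tunables took effect after a reboot should be something like this (a sketch; the unit number is an example, and I'm assuming the stock vtnet driver, which exposes these tunables read-only via sysctl):

```
# Loader tunables show up via sysctl after boot:
sysctl hw.vtnet.tso_disable hw.vtnet.lro_disable hw.vtnet.csum_disable
# The interface options should no longer list TSO4/TSO6 or LRO:
ifconfig vtnet0 | grep options
```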

In the meantime, I've made a basic script to "cold" modify / inject data into an offline OPNsense VM, which solves an issue that may soon be gone.
#2
Indeed, but with virtio I previously saw throughput drop by roughly a factor of three once I used Suricata, hence the reason I pass through physical interfaces.
Perhaps this has been solved more recently?
#3
I've written a (quick and dirty) script to handle a specific scenario for my OPNsense VM.

I'm hosting OPNsense on a KVM hypervisor with 10 Gb NICs (named ixl0..9).
I back up that VM every night and restore it on a second, much smaller hypervisor which has 1 Gb NICs (named igb0..9).
I do this because I don't want to handle a high-availability scenario, which is quite an insane amount of work (every interface needs a VRRP, and some of the plugins aren't HA-aware, meaning more work).

Anyway, once I've restored the VM on my spare device, it won't work unless I rename my interfaces from ixl to igb, which makes sense.
To make things smoother, I came up with a (perhaps bad) idea:

1. Mount restored disk image locally on backup KVM hypervisor
2. Replace interface names in config.xml
3. Unmount disk image
4. Profit !

So far, I've come up with a script to handle this.
The script assumes that the host hypervisor has a working ZFS implementation and can mount QEMU disk images via qemu-nbd; a condensed sketch of the idea follows.
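
The core of it boils down to something like this (a minimal sketch, not the full script: the image path, partition index, and pool/dataset names here are assumptions for a default OPNsense ZFS-on-root install):

```
#!/usr/bin/env bash
# Sketch only -- adapt paths, partition index and pool/dataset names.
set -euo pipefail

IMG=/var/lib/libvirt/images/opnsense.qcow2   # assumed image location
ALTROOT=/mnt/opnsense

modprobe nbd max_part=16                     # expose the image's partitions
qemu-nbd --connect=/dev/nbd0 "$IMG"

zpool import -f -d /dev/nbd0p4 -R "$ALTROOT" zroot   # assumed ZFS partition/pool
zfs mount zroot/ROOT/default                 # root dataset is canmount=noauto

# Rename ixl0..9 -> igb0..9 in the OPNsense configuration.
sed -i 's/\bixl\([0-9]\)/igb\1/g' "$ALTROOT/conf/config.xml"

zpool export zroot
qemu-nbd --disconnect /dev/nbd0
```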

I'm pretty sure there are better solutions; perhaps you could tell me how you manage cross-hypervisor OPNsense backup/restore scenarios?

Anyway, please find the script at https://github.com/deajan/linuxscripts/blob/master/virsh/offline_rename_opnsense_interfaces.sh

Any feedback is appreciated.
#4
General Discussion / Re: Need a bit help on IPv6 routing
September 15, 2025, 04:11:31 PM
Makes perfect sense :)
Thank you.
#5
General Discussion / Re: Need a bit help on IPv6 routing
September 15, 2025, 03:57:39 PM
Thanks for the reply.
I guess that means that if I set up a /64 on WAN, I will definitely need prefix delegation in order to get multiple /64 subnets, and hence configure a DHCPv6 server on the BGP routers?

Is there perhaps a "simpler" way to tell OPNsense that it's allowed to use the whole /48 net?
#6
General Discussion / Need a bit help on IPv6 routing
September 15, 2025, 03:44:43 PM
Hello,

I've been an OPNsense user for almost 10 years now, and use it at home & work.
I've set up a couple of home IPv6 networks, where I had to use NPTv6 since the ISP wouldn't hand out a prefix delegation larger than /64, so I have basic IPv6 knowledge.

My problem today is quite different.
I've set up BGP routers which have IPv4 and IPv6 sessions.
My BGP routers are bridged to my OPNsense, which has a DMZ VLAN interface behind which I have a couple of servers.

So far, my setup looks like this:

```
(BGP router(s))--------[bridge]----------(OPNsense WAN_________OPNsense DMZ)----------[bridge]-------------[VM]
2001:X:Y:0::254/48 (VRRP)                2001:X:Y:0::1/48      2001:X:Y:FF::254/64                         2001:X:Y:FF::1/64
2001:X:Y:0::253/48 (RTR1)                GW 2001:X:Y:0::254                                                GW 2001:X:Y:FF::254
2001:X:Y:0::252/48 (RTR2)
```

I've set the WAN address as static since I don't plan on running a DHCPv6 server on the BGP routers.
I've set the DMZ address as static too, as well as the VM's.

I can't ping the BGP routers from the VM (traceroute shows it stops at OPNsense); pinging OPNsense itself works.
I can ping both the BGP routers and the VM from OPNsense.
I can ping OPNsense from the BGP routers.

I came to the conclusion that OPNsense doesn't route IPv6 from the DMZ to the WAN interface.
I did, of course, set up an IPv4/IPv6 any-to-any rule on the DMZ interface for my tests.
I've also checked that IPv6 forwarding is enabled:
```
# sysctl net.inet6.ip6.forwarding
net.inet6.ip6.forwarding: 1
```

My IPv6 routing table looks sane to me:
```
netstat -nr

[ipv4...]

Internet6:
Destination                       Gateway                       Flags         Netif Expire
default                           2001:X:Y:0::254               UGS          vtnet4
::1                               link#6                        UHS             lo0
2001:X:Y::/48                     link#5                        U            vtnet4
2001:X:Y:0::1                     link#6                        UHS             lo0
2001:X:Y:FF::/64                  link#25                       U            vlan04
2001:X:Y:FF::254                  link#6                        UHS             lo0
fe80::%vtnet4/64                  link#5                        U            vtnet4
fe80::5054:ff:feb9:2fc7%lo0       link#6                        UHS             lo0
fe80::%lo0/64                     link#6                        U               lo0
fe80::1%lo0                       link#6                        UHS             lo0
fe80::%vlan04/64                  link#25                       U            vlan04
fe80::5054:ff:fe32:b0eb%lo0       link#6                        UHS             lo0
```
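
If it helps with debugging, I can capture on both sides while pinging a BGP router from the VM; this is what I'd run (interface names as in the table above):

```
tcpdump -ni vlan04 icmp6    # DMZ side: do the echo requests arrive here?
tcpdump -ni vtnet4 icmp6    # WAN side: do they leave, and does anything answer
                            # (echo replies, neighbor solicitations)?
```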

All my IPv4 networking works, so my problem really looks IPv6-only.
Since I've set up a /48 on WAN and a /64 on DMZ, is there anything else I should have configured besides the firewall rule?

Also, a side question: since my VM on the DMZ interface will be publicly accessible and have an AAAA record, I configured it with a static IPv6 address.
Is that "good practice", or should I go the SLAAC / DHCPv6 way? If so, doesn't that make it more complicated to find its IP when setting up AAAA records?

Thanks for any insight.
#7
At least they finally did something and responded on Bugzilla.
My last bug report really isn't worth their time, apparently... https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=280615
#8
Can confirm that 25.7rc2 fixes the issue for me.

Thank you guys, this made me question my network admin skills and/or sanity ;)
#9
@meyergru: Thanks for the tip.
I've updated to 25.7rc2 and the workaround from the OPNsense team indeed works.

I'm happy that there is a logical explanation, as I was just questioning my sanity :)
#10
I usually deploy OPNsense on AlmaLinux with KVM, so I can back up / restore the whole VM in a couple of minutes.
I've had very good results for years. Ping me if you need some advice (especially when not using PCI passthrough, for performance).
#11
Hello,

I'm facing a real WTF situation and could really use some advice here.
I have an OPNsense firewall (25.1.10) and cannot wrap my head around my issue.

Randomly, pings to 8.8.8.8 or 1.1.1.1 get blocked by the firewall.
From the same LAN subnet, some computers can ping, others can't.
After a while, other computers can't ping, and some that couldn't now can.

When looking at the firewall logs, the ICMP packets that get blocked are processed by the same rule that passes other ICMP packets.
When clicking on the rule link, the window closes because the rule isn't found (see the attached screenshots).

In order to rule out some factors, I've:
- Disabled Zenarmor
- Disabled Suricata
- Disabled Crowdsec
- Marked my default gateway as always on (no monitoring)

When I ping 1.1.1.1 or 8.8.8.8 from the firewall itself, of course both respond.

I've also tried checking the "Disable reply-to" WAN setting in Firewall > Settings > Advanced.
I've checked that I don't have any special routes to 1.1.1.1 or 8.8.8.8.

This already happened with the 24.7 series, as far as I can remember.
I've just updated from 25.1.10 to 25.1.11, and a computer that could ping 8.8.8.8 but not 1.1.1.1 can now ping 1.1.1.1 but no longer 8.8.8.8.
I've also exported my OPNsense config file and searched it for those IPs to make sure I didn't forget anything.
The only entry I found for 1.1.1.1 is for query forwarding in Unbound.
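
For the next occurrence, I plan to inspect the live ruleset and states from a shell, something like the following (the rule label below is just a placeholder for the one shown in the live log):

```
# Dump the loaded ruleset verbosely and search for the label from the live log:
pfctl -vvsr | grep -A3 'label "0123456789abcdef'
# Look for existing states toward the affected destinations:
pfctl -ss | grep -E '8\.8\.8\.8|1\.1\.1\.1'
```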

I'm totally puzzled as to why this is random.
I'd be grateful for any clue about where to search.

Thank you.
#12
I've also encountered multiple (and strange) resolve errors with unbound like the following:

```
2024-09-18T13:46:40   Error   unbound   [54445:2] error: SERVFAIL <somedomain.tld. A IN>: all servers for this domain failed, at zone somedomain.tld. upstream server timeout   
2024-09-18T13:43:36   Error   unbound   [17415:1] error: SERVFAIL <xx.xx.xx.xx.in-addr.arpa. PTR IN>: all servers for this domain failed, at zone 64.92.188.in-addr.arpa. no server to query no addresses for nameservers   
2024-09-18T13:43:36   Error   unbound   [17415:0] error: SERVFAIL <xx.xx.xx.xx.in-addr.arpa. PTR IN>: exceeded the maximum nameserver nxdomains   
2024-09-18T13:43:30   Error   unbound   [17415:3] error: SERVFAIL <xx.xx.xx.xx.in-addr.arpa. PTR IN>: exceeded the maximum nameserver nxdomains   
2024-09-18T13:33:04   Error   unbound   [17415:3] error: SERVFAIL <85.21.107.40.zen.spamhaus.org. A IN>: exceeded the maximum nameserver nxdomains   
2024-09-18T13:32:26   Error   unbound   [17415:2] error: SERVFAIL <somedomain.tld. A IN>: all servers for this domain failed, at zone somedomain.tld. from 194.0.34.53 no server to query nameserver addresses not usable
```

After reading a lot of documentation, I found someone suggesting that ISPs may tamper with DNS.
In my setup I've got three internet providers, so I configured Unbound to use WAN1 first, then WAN2, then WAN3.
While testing DNS requests, I noticed that the WAN1 provider was (probably) tampering with DNS, since both WAN2 and WAN3 produced correct results but WAN1 didn't.
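
In case someone wants to reproduce the comparison: with Unbound pinned to one WAN at a time (via its outgoing network interface setting), its answers can be checked against a known-good public resolver, e.g.:

```
# Ask the local Unbound (egressing via the currently pinned WAN):
drill @127.0.0.1 somedomain.tld A
# Compare against a known-good public resolver:
drill @9.9.9.9 somedomain.tld A
```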

Hopefully this might help some other people.


#13
Found where to add the fsfreeze hook; see https://github.com/opnsense/core/issues/7681

Nevertheless, there is a bug in qemu-guest-agent that prevents the thaw script from being launched properly :(
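
For reference, qemu-ga invokes the hook with a single "freeze" or "thaw" argument, so a minimal skeleton looks roughly like this (logging only; see the issue above for where OPNsense expects the hook to live):

```
#!/bin/sh
# qemu-guest-agent calls the fsfreeze hook with "freeze" or "thaw".
case "$1" in
    freeze) logger "fsfreeze: about to freeze filesystems" ;;
    thaw)   logger "fsfreeze: filesystems thawed" ;;
esac
```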
#14
So I finally tried one last solution, posted on the FreeBSD bug tracker: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=165059

Setting the following in `/boot/loader.conf` solved the issue:

```
#hw.vtnet.X.tso_disable="1"
hw.vtnet.tso_disable="1"
hw.vtnet.lro_disable="1"
#hw.vtnet.X.lro_disable="1"
hw.vtnet.csum_disable="1"
#hw.vtnet.X.csum_disable="1"
```

I've jumped from remote uploads at 20-30 Mbit/s to a whopping 300 Mbit/s ^^
I had previously tried this with ethtool without success.
Also, I had to reboot for the changes to take effect.
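
For anyone wanting to compare before/after numbers, here's a quick measurement, assuming iperf3 is installed on both ends (the server IP is a placeholder):

```
# On the remote end:
iperf3 -s
# From the VM, measure upload throughput for 10 seconds:
iperf3 -c 192.0.2.10 -t 10
```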