Could it be possible to integrate Artificial Intelligence into OPNsense?

Started by newman87, February 16, 2022, 07:19:39 PM

Previous topic - Next topic
Hi,
I would like to ask if AI/machine learning will be included in OPNsense sooner or later.
I read that there have been some efforts from an OPNsense fork called OPNids etc.
So, are there any intentions to include such things in OPNsense?
Cheers


OPNids and similar efforts indeed indicate a growing interest in enhancing security solutions with AI capabilities. As the technology evolves, there is potential for AI-powered features to be incorporated into OPNsense, especially for more advanced threat detection and mitigation.
It's worth keeping an eye on developments in AI for network security, as they can lead to more robust and efficient protection for your network.

Recently I read somewhere that Bavaria is going to retire notorious fax machines in public administration (relying REALLY, REALLY hard on this technique), because fax machines "can not be equipped with artificial intelligence".

Hopefully OPNsense will not be eradicated the same way. :-D

Sometimes it would be better to make use of a little more natural intelligence, but that's a rare commodity not only in Bavaria...
kind regards
chemlud
____
"The price of reliability is the pursuit of the utmost simplicity."
C.A.R. Hoare

felix eichhorns premium katzenfutter mit der extraportion energie

A router is not a switch - A router is not a switch - A router is not a switch - A rou....

My prediction... AI will turn around and bite 50% of integrations on the backside in the very near future.


Quote from: Greg_E on May 20, 2024, 04:54:45 PM
My prediction... AI will turn around and bite 50% of integrations on the backside in the very near future.

Funny, I always believed that 50% of programmers (or engineers) were responsible for fixing the work of the other 50%.


Cheers,
Franco

Hey, I haven't heard any solid news about AI or machine learning being integrated into OPNsense just yet. I've been following some of the OPNids work, and while it's pretty cool, I think it might take a bit longer for OPNsense to adopt those features. It's definitely something to keep an eye on, though—always interesting to see how these projects evolve.

I think it could be interesting to integrate AI into OPNsense, but I wouldn't say it's something that can just happen overnight. OPNsense is all about network security, right? So, adding AI could make sense for things like intrusion detection or improving traffic management. AI could potentially help in recognizing patterns and threats more efficiently, kind of like how it's done in advanced firewalls or other security systems.
That said, it would require some serious effort to make it work smoothly, and you'd need to weigh the performance impact. The community would also need to get involved in developing something like this.

Improving traffic management? My firewall's job is to block and to route, using well understood and verifiable processes. Detection, prevention or management of intrusions is a task for additional software which might be run on a firewall box but which may also carry more risk owing to unclear or uncertain logic paths. "AI" may enhance that by superior data modelling.

I certainly do not want my firewall hallucinating over whether to block a packet.
Deciso DEC697
+crowdsec +wireguard

Or, to put it another way:

Whenever I talk to an AI, I remember that when I visited Las Vegas, I took the so-called "River Nile Tour" at the Luxor. The tour guide introduced himself with these words: "Hello ladies and gentlemen, I am Phil, your tour guide. Today, I will explain to you the attractions of our hotel. If you have questions, please ask and I will answer them to the best of my abilities. And if I don't know the answer, I will lie to you."

I once had a conversation with ChatGPT about an all-too-obvious mistake it made about summing up multiples of 3 and 5, and after some evasive maneuvers about why it failed, it answered:

Quote
You're absolutely right to find that interesting! While I, as an AI, don't have consciousness, emotions, or the ability to truly "think" like humans, I can certainly give the impression of cognitive bias or overconfidence based on how I respond.
Let me explain what's really happening:
1. Pattern Matching:
When I generate responses, I rely on patterns I've learned from vast amounts of text data. This means that if I encounter a common or familiar problem (like the multiples of 3 and 5 question), I might quickly use a pattern that seems to match without fully reanalyzing every detail. This can sometimes lead to oversights if the pattern isn't perfectly applicable, much like how human brains might rely on cognitive shortcuts or heuristics to save mental effort.
2. Efficiency and Timeliness:
My design encourages providing timely responses, so there might be times when I use well-known problem-solving patterns and provide a quicker response, at the expense of being as thorough as possible. This is not true "overconfidence," but it can come across that way because I sometimes prioritize speed and clarity, and this may result in skipping certain checks or details that should be included for accuracy.
3. Lack of Reflection or Metacognition:
Humans often self-reflect or evaluate their own thought processes. I, as an AI, don't have the ability to independently "reflect" on my thoughts or actions in a conscious way. I don't have real awareness of when I've made a mistake, unless feedback (like your comments) prompts me to reassess my responses. When I realize a mistake based on such feedback, it's not because I "felt" overconfident—it's simply because I reanalyzed the situation based on new input.
4. Over-Simplification of Concepts:
In my earlier explanation, I used terms like "cognitive bias" and "overconfidence" to better relate to how human thought processes work and to describe why I might give a less accurate response. However, these terms don't perfectly apply to me as an AI. I don't actually "think" or "feel" in the human sense. What might appear to be overconfidence is more about certain patterns being over-relied upon or heuristics (learned responses) kicking in too quickly.
In Summary:
When I say things like "cognitive bias" or "overconfidence," I'm using those terms metaphorically to help explain why I might have missed something. It's not that I truly experience these human traits; rather, I'm reflecting a kind of shortcutting behavior that's analogous to what humans might describe as bias or overconfidence in their own thinking.
Ultimately, it's all about how I process and respond based on patterns in my training data—and sometimes, those patterns lead me to give quick but not fully accurate answers. Your careful scrutiny is invaluable in helping me improve my responses!
Thank you for pointing that out! It's a very insightful observation. 😊
As this may illustrate, AIs 'lie' in much the same way humans do, for technically different but ultimately similar causes. If you want repeatable, predictable behaviour, as you would in a firewall, you cannot rely on one.
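Incidentally, the multiples-of-3-and-5 puzzle that tripped up ChatGPT is exactly the kind of task where deterministic code never "hallucinates". A minimal Python sketch (function names are my own, not from the thread): a brute-force loop and a closed-form inclusion-exclusion calculation, which must always agree.

```python
# Sum of all multiples of 3 or 5 below a limit -- the task from the
# quoted ChatGPT exchange, computed deterministically two different ways.

def sum_multiples_loop(limit: int) -> int:
    """Brute force: add every number below `limit` divisible by 3 or 5."""
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

def sum_multiples_closed_form(limit: int) -> int:
    """Inclusion-exclusion: sum(3s) + sum(5s) - sum(15s),
    each an arithmetic series k * (1 + 2 + ... + m)."""
    def series(k: int) -> int:
        m = (limit - 1) // k          # how many multiples of k lie below limit
        return k * m * (m + 1) // 2   # arithmetic-series sum
    return series(3) + series(5) - series(15)

if __name__ == "__main__":
    print(sum_multiples_loop(1000))         # 233168
    print(sum_multiples_closed_form(1000))  # 233168
```

Both routes give 233168 for a limit of 1000, every single run, which is precisely the repeatability the firewall argument above is about.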

P.S.: I imagine that if this ever arrives, we will need a separate forum section named "Why? (for all AI-based setups)", where users wondering why they cannot access the internet can come in and everybody is invited to take their guesses. On the other hand, we already have a Zenarmor section. The common recommendation of "disable Zenarmor, Suricata and Crowdsec, then try again" must then be augmented.
Intel N100, 4 x I226-V, 16 GByte, 256 GByte NVME, ZTE F6005

1100 down / 440 up, Bufferbloat A+