Instead of using ntpd in client mode, it's easy enough to create an actions conf for ntpdate and then cron it 4x daily.
After you create the action, run "service configd restart"; the action then shows up in the Cron area of the GUI. In Cron you enter "-s [ntp dns name]" as the parameter.
My /usr/local/opnsense/service/conf/actions.d/actions_timeupdate.conf
[sync]
command:/usr/local/sbin/ntpdate
parameters:%s
type:script
message:Syncing time with %s
description:Update local time
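With "-s time.nist.gov" entered as the cron parameter (the server name here is just a placeholder, use whichever NTP host you prefer), the command that configd ends up firing should look roughly like this:

/usr/local/sbin/ntpdate -s time.nist.gov

The -s switch sends ntpdate's output to syslog instead of stdout, which is what you want for a cron job.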
Sure, but why?
Very bad idea. A continuous update is way better than sudden jumps. See this for a striking example (https://doc.dovecot.org/2.3/admin_manual/errors/time_moved_backwards/) of why you should never use periodic ntpdate.
I'll address both.
Why? Because there's no need to run a daemon that needs way more resources.
Issues? Not sure, I have used ntpdate in cron for many years across various *nix OSes in various environments. I never experienced an issue.
If you do that the time might go backwards. This must never happen by definition on a Unix system. The running daemon adjusts the clock in a way that time only moves forward. That's the reason why ntpd exists. Resources well spent.
Well spoken, Patrick.
@Brandywine: You may not have, yet, but I actually did - and it was exactly the Dovecot issue that I linked to. It took me days to find out why I got no more e-mails. Backward jumps are the more common case with ntpdate, since most computer clocks run too fast, because nobody ever spends the money on a variable ballast capacitor for those clock crystals.
When you look for "backward time jump problem", you will find dozens of examples of services that do not "like" this, including Asterisk (https://community.asterisk.org/t/backwards-time-leap-issue/108173) and dozens of others (https://serverfault.com/questions/1123895/what-are-examples-of-software-that-may-be-seriously-affected-by-a-time-jump).
Maybe a misunderstanding of ntpdate?
It can step, but it can also slew, and we like the latter.
ntpdate slews the local clock when the difference is less than 0.5 sec.
If the local system drifts 0.5 sec within a 6-hour period, then something is wrong with the system hardware.
If the local clock really is that bad (0.5 s in 6 hrs), you can avoid stepping the clock by running ntpdate every 1-2 hours.
It's not an issue with ntpdate itself, it's an issue with how it is used on systems that have very bad clock drift.
That Dovecot page didn't explain how ntpdate was being used such that it had to step -2 s. Sounds like 1) a bad local clock, or 2) ntpdate not being run frequently enough.
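And if you really want to rule out stepping entirely, stock ntpdate has a switch for that too - a sketch, same placeholder server as in my example above (check ntpdate(8), the documented thresholds vary a bit between versions):

# force a slew via adjtime() even when the offset is above the step threshold
/usr/local/sbin/ntpdate -B -s time.nist.gov

Keep in mind a slew of a large offset can take a very long time to finish.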
I get why you might want to run a cron to do this, but is it really saving you anything? Even on my lowest power computers I'm not seeing spikes in CPU when it checks ntp servers. Maybe I don't load them enough to see the issue. I also don't see any advantage on my OPNsense, I keep it synced to my GNSS locked NTP server appliance.
The only benefit I can see is if there were an attack on time servers: spoofing a date far forward or far backward could cause real problems with licensing, file shares, secure email and secure web, which all depend on "close enough" time sync.
I have seen some really bad internal clocks too; I had one old server that would lose a couple of minutes in a day. That's the server that caused me to buy my first GPS NTP server. It also happened to be my Windows AD server, which caused all sorts of issues on the domain. It was a decent lower-end Supermicro X7-series box, which makes this surprising.
I know that ntpdate does adjtime() when the difference is small. However, initially you did not say that you should run it every 1-2 hours, but "4x daily".
As for the accuracy of standard PC clocks, see this: https://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality/#AEN1230
I personally saw computer clocks with a drift of 50 PPM, which corresponds to about 4 seconds per day, with 20-30 PPM being a valid assumption for most clocks. You will find many reports of clock skews of > 10 s per day.
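For reference, the arithmetic behind that: 50 PPM is 50 microseconds of error per second, so 0.000050 x 86,400 seconds/day = 4.32 seconds/day, and a typical 20-30 PPM clock still drifts roughly 1.7-2.6 seconds per day.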
Adding to this, ntpdate by design is unable to account for network latency and jitter, unlike ntpd or chrony. I also never saw any relevant load by those daemons.
ntpdate is the means of choice for a one-shot correction on system startup in case the CMOS battery is dead or the internal clock is off for other reasons - and most Linux distributions used it like that before systemd took over that task.
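FreeBSD (which OPNsense builds on) has that boot-time correction baked into rc.conf, by the way - roughly, from memory, see rc.conf(5); the host name is just a placeholder:

ntpdate_enable="YES"            # one-shot correction at boot
ntpdate_hosts="time.nist.gov"
ntpd_enable="YES"               # then hand over to the daemon
ntpd_sync_on_start="YES"        # lets ntpd make a large initial correction (-g)

So the one big jump, if any, happens at boot, and the daemon keeps things continuous from there.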
@Greg_E: The "attack" scenario can be ruled out more easily with a daemon, because it takes its time from a set of servers, which makes it more robust than a single call against a specific time server like ntpdate does it. There is also an election login that eliminates outliers build into ntp. This has led to confusion because initially after start, no server is trusted until that logic has settled.
Normally I would have an internal ntpd that is my local authoritative time source, and all other devices/hosts would use that for time. Here I don't run a local NTP server/appliance/device.
The concerns and issues listed are not to be discounted, but for my OPNsense fw, using ntpdate 4x daily works well. If running it 24x daily is not enough to keep it within the slew-adjust range, then perhaps use something else.
As for resources, on a small N100/150 device where resources are very limited, saving on resources is the game (give all resources to the OPNsense stuff). We're not gonna see CPU or memory spikes when ntpd does an NTP sync; it's too small to recognize among the other hog processes that are running. But an idle daemon still needs memory and kernel time to track its state and manage it, etc. You don't need any of that when invoking a utility that runs and exits.
Different philosophy, it seems. I run ntpd on every system and point these to my local stratum 1 source. I also run snmpd on everything considered a server or "infrastructure". And lldpd. And collectd sending to my central influxdb. Etc.
A little off-topic, but I have observed quite a difference between the behavior of the chrony plugin vs. the standard "Network Time".
Network Time appears to make regular 'jumps' and shows some erratic behavior, whereas chrony does not.
Don't know if you are familiar with the NTP Pool, but this is a graph of one of my servers participating in the pool.
It shows the measured offset over time. Every green dot is a measurement of the offset. Some monitors are further away than others, so distance plays a role in the measurement results, but overall the dots are nicely grouped around the center line (0 ms offset).
I'll try to post an example of NTPd also.
When you say "Network time ppears to make regular 'jumps' and shows some erratic behavior, whereas chrony does not.", are you referring to ntpd client mode from OPNsense gui vs using the chrony plugin solution?
And then, you mention pool.ntp.org, are you suggesting that source is better than others?
@brandywine
Yes, indeed, I am referring to ntpd client mode from the OPNsense GUI vs. the chrony plugin.
Quote from: Patrick M. Hausen on September 24, 2025, 06:00:55 PMDifferent philosophy, it seems. I run ntpd on every system and point these to my local stratum 1 source. I also run snmpd on everything considered a server or "infrastructure". And lldpd. And collectd sending to my central influxdb. Etc.
ntpd with UDP (and possibly TCP) 123 listeners? Or do you mean configured to run "ntpd -q"?
Not sure if this is a bug in OPNsense, but when I had ntpd -q configured in the GUI with a NIST IPv4-only time server name, I was getting an error that said "unable to resolve name" for that server. Seems odd, because a name lookup from the fw for that NIST server was fine. My WAN has IPv4 and IPv6 DHCP enabled, so I wonder if ntpd -q was trying to find an AAAA record, which the IPv4-only NIST server does not have. I then changed the NIST server in the GUI to an IPv6 server, and after applying I did not get any resolve error. That's curious.
I then just tried ntpdate with the NIST IPv6 server name and was successful. That's the point at which I ditched ntpd and just cron'd ntpdate.
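If you want to test the AAAA theory, a quick lookup from the firewall shell should settle it (the server name here is a stand-in for the NIST host I used; drill is in the FreeBSD base system):

drill time.nist.gov A      # should return an answer for an IPv4-only host
drill time.nist.gov AAAA   # an empty answer section would back the AAAA guess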
ntpd in regular operation as a local time source connected to one or multiple peers. So UDP 123, yes.
Actually in most of my locations I use the official time source of Germany - who encourage doing so - on one device, namely the Internet facing firewall, and then point all other servers at that. The official time source of course being 4 IPv4 and 4 IPv6 addresses and 4 different physical machines.
Internally then it's
server 192.168.1.1 iburst prefer
for every single server or VM or whatever runs on "Unix". Why not - this is how NTP was designed.
At home I have a stratum 1 because I was curious if these devices are any good (they are!):
https://centerclick.com/ntp/
Again common wisdom has been for decades to disable time sync from host to VM and run ntpd. That's what I'm doing everywhere - why not? ntpd does not use resources in an order of magnitude that would matter - at all. What do I gain by using your potentially less reliable method?
Quote from: Patrick M. Hausen on September 24, 2025, 07:33:47 PMntpd in regular operation as a local time source connected to one or multiple peers. So UDP 123, yes
Again common wisdom has been for decades to disable time sync from host to VM and run ntpd. That's what I'm doing everywhere - why not? ntpd does not use resources in an order of magnitude that would matter - at all. What do I gain by using your potentially less reliable method?
Let me clarify.
I meant, you run ntpd as a resident daemon? If so, this creates listener port(s) on all interfaces by default. If other hosts are not pointing to that server for time, then you don't need to run ntpd as a resident daemon. You would use ntpd -q instead, but as far as I know -q is the "run as client" switch, and it tells ntpd to quit after it's done, so you still need cron or the like to invoke it.
If ntpd is a resident daemon and suddenly a 0-day exploit comes out, then your "everything" is suddenly exposed to that. My point was: if the system does not need a resident daemon service with listeners on the NICs, then it should not be there. Minimal resources or not, if the service is not needed, then it should not be run at all. A resident ntpd is different from ntpd -q.
I would perhaps prefer ntpd to have another switch, something like a hypothetical ntpd -f, to tell ntpd to run as a resident daemon but without any listeners - basically a true real-time NTP client. Yes, you can restrict queries in the conf file (restrict default ignore, restrict 127.0.0.1), and we can also restrict the listener to the lo interface, but that still leaves resource-using UDP listeners.
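For the record, stock ntpd can get reasonably close to that with the interface directive in ntp.conf - a rough sketch, and I'd double-check the matching rules in the ntp.conf docs because they are a bit subtle; it's still not a true zero-listener mode since a loopback socket remains:

interface ignore all           # don't bind anything by default
interface listen 127.0.0.1     # ...except loopback
interface listen ::1

Combined with the restrict lines, that at least keeps ntpd from answering on the NIC addresses.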
Wow, that's a lot of ntpd config to set up and manage, yet I basically get the same thing with a short ntpdate in cron - hence the "Easy" in the thread title.
Does the OPNsense fw have anything that will crash if (if!) the time stepped back? Do we care if the OPNsense box is not accurate to within a nanosecond?
Quote from: BrandyWine on September 24, 2025, 07:47:00 PMI meant, you run ntpd as a resident daemon?
Yes, of course. It keeps the server's time and skews the clock so the deviation becomes less and less the longer the runtime. Persistent across reboots, because the information is kept in the drift file.
Yes, it creates listening sockets. So?
server <my local server in my data centre> iburst
restrict <my local server in my data centre> noquery nomodify notrap nopeer
restrict 127.0.0.1 nomodify notrap nopeer
restrict -6 ::1 nomodify notrap nopeer
restrict default ignore
restrict -6 default ignore
Done. If you have 2 or 3 servers as you should, add all of them. That's how you do it. Since ... don't remember.
I use it as a definitive source for my network as well and distribute it via DHCP as the main NTP server to all local networks. This also would not work if only the local time of OpnSense was being synchronized via ntpdate.
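(In OpnSense that is just the NTP servers field in the DHCP service settings; in raw ISC dhcpd terms it amounts to a single option, shown here with a placeholder LAN address:

option ntp-servers 192.168.1.1;

and clients that honor DHCP option 42 pick it up automatically.)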
Quote from: Patrick M. Hausen on September 24, 2025, 08:22:03 PMYes, of course. It keeps the server's time and skews the clock so the deviation becomes less and less the longer the runtime. Persistent across reboots, because the information is kept in the drift file.
It's just my peeve... the socket is the connection; the listener is just the place where sockets are formed.
For clarity (again): time sync for clients. A cron'd ntpd -q still does the same thing as a resident ntpd - it loads, runs its routine, updates the drift, adjusts the skew, adjusts the time, yada yada yada, then quits.
You're kinda proving my point: you need to manage that restrict policy. Certainly not an issue for small, non-dynamic environments.
But your NTP server is the main source for local queries, so why would the local clients (hosts, devices, etc.) need to run a full ntpd (resident daemon)? They would only need to run as clients (ntpd -q, or other).
A resident ntpd is a "time server" (unless all the restricts make that "time server" unable to answer queries).
ntpd -q is not a time server; it acts as a client only and does not keep using resources like resident daemons do.
Certainly an argument to make is: why not just cron ntpd -q? That can of course be done too, but a little more config is needed.
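The "little more config" is basically a one-server ntp.conf plus a crontab line - a rough sketch, paths and server name assumed, adjust for your box (-g allows a large first correction):

# minimal client-only /etc/ntp.conf
server time.nist.gov iburst
driftfile /var/db/ntpd.drift

# crontab: set the clock every 6 hours, then exit
0 */6 * * * /usr/sbin/ntpd -g -q

With iburst the exchange usually finishes within a few seconds before ntpd exits.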
Quote from: BrandyWine on September 24, 2025, 08:55:32 PMYou're kinda proving my point: you need to manage that restrict policy. Certainly not an issue for small, non-dynamic environments.
I am managing servers. Systems that run 24x7. Some of them hardware, some virtual. All of these run an ntpd ... because they can, because you run an sshd, an snmpd, ... etc.
ntpd -q exits. I want a time keeping service on my 24x7 running servers. It's a basic function of a system IMHO.
Quote from: meyergru on September 24, 2025, 08:51:20 PMI use it as a definitive source for my network as well and distribute it via DHCP as the main NTP server to all local networks. This also would not work if only the local time of OpnSense was being synchronized via ntpdate.
You use the fw as NTP server?
And agreed, a time server needs a resident ntpd (or the like) to be a time server. The context here, however, is just time sync for a client. I set my OPNsense to query a NIST time server using ntpdate. No issues.
Yes, because I want all machines in my network to have a definitive source, even if it were off by any amount of time. And I do not want to configure each client individually. OpnSense is pretty much the gateway for anything internet-bound, be it as a reverse proxy, a central NTP or DNS server or an SMTP gateway.
With multiple VMs and containers, I lack the enthusiasm to configure all of them individually.
Quote from: Patrick M. Hausen on September 24, 2025, 09:01:52 PMntpd -q exits. I want a time keeping service on my 24x7 running servers. It's a basic function of a system IMHO.
Again, I think you missed the point I was making.
If you had an ntpd that could run without setting up any listeners, then that would make for a good NTP client, and you would not have to worry about the restrict config and the like. For clients, technically ntpd does not need to stay running all the time. For unpredictable remote shell access, sshd does need to stay running all the time - unless you have a strict access policy, say 8am-5pm, in which case I would cron sshd to start at 7:59am and stop at 5:01pm. Most setups are "lazy": just leave everything running, even when it's not needed.
In your setup (for just your clients), I would cron ntpd -q to run at hour 1, minute 9.
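In crontab terms, something like this (paths assumed, adjust for your system):

59 7 * * * service sshd onestart    # open the 8am-5pm shell window
1 17 * * * service sshd onestop     # close it again
9 1 * * * /usr/sbin/ntpd -g -q      # time sync at 01:09 daily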
Quote from: BrandyWine on September 24, 2025, 09:02:14 PMYou use the fw as NTP server?
Of course!
The firewall is the only system with a "full" Internet connection without NAT - at least for IPv4. And the system with the lowest latency, because it is connected directly to the uplink.
For a typical (non enterprise, redundant, super heavy ...) office network you run
- router (obviously)
- NAT (equally obvious)
- recursive DNS
- NTP
- DHCP (depends, maybe that goes to your Windows DC)
on the firewall system where it belongs. I have four decades of experience, consulted government agencies, am quite well known in the field - and I do not understand the question. What are you hinting at?
The firewall/router/NAT thingy is providing "Internet" to the office network. That means all services needed for "Internet" including a time source.
Checkpoint
Sidewinder
McAfee
...
Of course you do. It's the best place for the time service unless your network is way larger and you have multiple dedicated time servers, DNS servers etc.
Quote from: meyergru on September 24, 2025, 09:09:36 PMYes, because I want all machines in my network to have a definitive source, even if it were off by any amount of time. And I do not want to configure each client individually. OpnSense is pretty much the gateway for anything internet-bound, be it as a reverse proxy, a central NTP or DNS server or an SMTP gateway.
With multiple VMs and containers, I lack the enthusiasm to configure all of them individually.
Yes, but I would not make the fw the server. fw should just be a client. The fw should just be doing fw. "UTM" was a buzzword like "cloud" was. I myself want my fw to do fw work, and that's it. You could just install another free OPNsense on a mini PC and use that on the LAN to provide all of the DHCP, DNS, NTP, etc.
Quote from: BrandyWine on September 24, 2025, 09:19:42 PMfw should just be a client.
That does make sense. In a large data centre deployment I would agree with that. But in most setups the firewall is the Internet access server. And it runs all services necessary for that. Which other system is more qualified, more hardened, better monitored ... to do that?
Quote from: Patrick M. Hausen on September 24, 2025, 09:16:54 PMFor a typical (non enterprise, redundant, super heavy ...) office network you run
- router (obviously)
- NAT (equally obvious)
- recursive DNS
- NTP
- DHCP (depends, maybe that goes to your Windows DC)
on the firewall system where it belongs.
What? That's NOT where those services (DHCP, NTP, DNS) belong. That's the "UTM" model, and it's not a good one. You are hinting at the best practice of not allowing hosts to reach out to the internet directly, which is good sec practice. But never run non-fw services on a public-facing fw device... if you want to be secure. If the OPNsense fw image can provide all the UTM services you need, then just install a 2nd one on the LAN and run all the non-fw stuff there. Sec 101.
If I were to do that, I would not use OpnSense as the router/firewall at all. OpnSense's value lies in that it can do most of the "gateway" work by virtue of being a full Unix-like machine. If I wanted to separate out all of the services like DNS, DHCP and such, there surely are more specialised appliances for that, but frankly, those are more or less applicable in enterprise-grade installations, like Patrick said.
Then again, for small and medium businesses and home lab users, OpnSense alone does the trick, so why should I convolute a fully working setup by using two OpnSense installations?
Also, for cloud-based setups, OpnSense does the trick on a VM host like Proxmox, where you cannot use any more specialised appliance or where you do not want to pay for that to happen.
But that discussion is way beside the point. YMMV, but for me, if I use OpnSense as the sole router, I still do not think using periodic ntpdate is a good idea, because:
1. If you want OpnSense to be the LAN NTP server, the ntpdate approach makes that outright impossible.
2. I do not see any real performance benefit, because even on limited platforms, there is not much load with NTPD or Chrony.
3. There is a risk to have network services die or otherwise misbehave, should your time jump or - worse - even jump backwards. This is documented and I experienced it personally.
No need to fight back, you can do whatever you like. I just would not do it that way, period.
UTM is the current model. Debating that goes into the realm of philosophy or strong opinions which does not lead us any further on this platform. I run all my OPNsense firewalls as UTMs. If I did not I would pick a different product. The value of OPNsense is exactly that it can run all the essential infrastructure services. A layer 3 switch can do "firewalling" without anything else. Just a pain to manage.
Quote from: meyergru on September 24, 2025, 09:30:38 PMIf I were to do that, I would not use OpnSense as the router/firewall at all. OpnSense's value lies in that it can do most of the "gateway" work by virtue of being a full Unix-like machine. If I wanted to separate out all of the services like DNS, DHCP and such, there surely are more specialised appliances for that, but frankly, those are more or less applicable in enterprise-grade installations, like Patrick said.
Then again, for small and medium businesses and home lab users, OpnSense alone does the trick, so why should I convolute a fully working setup by using two OpnSense installations?
It's the balance between the (bad) UTM model and better security.
You can have "UTM" with all the services bundled on one device, just as they are available from free OPNsense, but install that onto a 2nd mini PC. It's no different from spinning up a new host/server that's not a VM, so those services are always available. Plus, you'll be taking abuse off the actual fw's NVMe SSD, so it will last longer. ;)
Not very convoluted at all.
Quote from: Patrick M. Hausen on September 24, 2025, 09:41:37 PMUTM is the current model. Debating that goes into the realm of philosophy or strong opinions which does not lead us any further on this platform. I run all my OPNsense firewalls as UTMs. If I did not I would pick a different product. The value of OPNsense is exactly that it can run all the essential infrastructure services. A layer 3 switch can do "firewalling" without anything else. Just a pain to manage.
Yes, choosing the UTM model sacrifices security. So easier to manage, less security. There's no argument to be had there.
Installing another OPNsense as a UTM on the LAN still provides the services you need, and access to those services has the fw in front of it. You can then take all of the non-fw services out of the actual fw, leaving the actual fw to do fw-only work. So you still have all the UTM services without sacrificing edge security.
Thus far we're just talking about the non-sec stuff (DNS, NTP, DHCP, etc.). We still have VPN, IDS/IPS, etc. I prefer those services to run somewhere else too, many times in a DMZ behind the fw. But there's "UTM" again to save everyone: just run fw, VPN, IDS/IPS on one device. It's just cruddy to take down internet access when an issue in VPN or IDS requires a reboot of the fw.
"UTM" used to relate only to sec services (fw, VPN, IDS/IPS, etc.). Now it has morphed into sec and non-sec services. Again, OK for mgmt reasons, but it adds sec risk. Layered security is, and has always been, a better model than UTM.
Obviously I am no fan of UTM, at least not when deployed as edge security.
Anyways, cron'd ntpdate, or ntpd -q with some config, keeps time good enough. I took the easy route that works. ;)
Two machines = more firewall rules to redirect traffic to a second machine, two configurations to backup, twice the hardware or multiple VMs (which I dislike for FW appliances for security reasons) ... need I say more?
We are not talking security architectures for large enterprises or high-value targets like banks (which I have developed myself), where you use redundant systems from different vendors to rule out the specific platform weaknesses that UTM systems could have.
BTW: If that were the target, two OpnSenses would be a bad choice (tm).
Again, beside the point, purely philosophical, and it has little to do with the initial approach.
Quote from: meyergru on September 24, 2025, 10:08:25 PMTwo machines = more firewall rules to redirect traffic to a second machine, two configurations to backup, twice the hardware or multiple VMs (which I dislike for FW appliances for security reasons) ... need I say more?
Yep, just two, basically identical, one with the non-sec services, the other without. Not that hard.
Rules for NTP, DNS, DHCP? So maybe 5 inbound access rules on the LAN port. You could also just create one rule to allow local nets access to the local non-sec services. DHCP would hand out the 2nd device's LAN IP for DNS, NTP, etc. Nothing really to configure at all, and you can then manage those services without worrying about touching the edge fw. I always cringe when I hear an admin say "hold on a sec, I need to update a DNS setting, let me ssh into the fw to do that". Another benefit: when a new version of OPNsense comes out, you can install it on the 2nd device first; if that goes well, then install it on the actual fw. Update the NVM on the 2nd device first, try out a new ZFS setting on the 2nd device first.
Shall we visit the versions forum: "I upgraded and now it's hosed, Unbound not working, won't boot, upgrade is stuck in an infinite loop".
I can Pros & Cons all day. ;)
The 2nd device is not gonna be doing much routing of anything. Client services stay on the LAN port; any outbound traffic will be from the device itself (updates, DNS queries, NTP sync, etc.).
All of this discussion changes if you move into PTP; no local clocks are solid enough to keep the tolerance over a several-hour period. And PTP is another security measure that some places are rolling out: if you aren't on time, you get no access to the requested resource. It is also becoming the default sync for audio and video over IP. SMPTE ST 2110 leverages PTP heavily, not for security but for signal sync. This replaces things like black burst and tri-level sync, and since it's adaptive, it's more accurate and finer-grained too.
Kind of off topic, but something people in IT should be thinking about in case they ever work on systems at a TV station. https://blogs.cisco.com/industries/take-your-st-2110-workflow-to-the-next-level