Timezones, sure. But what about before timezones came into use? Or even halfway through - which timezone, considering Königsberg used CET when it was part of Germany, but switched to EET after it became Russian? There are even countries with timezone offsets that differ by 15 minutes.
And don't get me started on daylight saving time. There's been at least one instance where DST was - and was not - in use in Lebanon - at the same time! Good luck booking an appointment...
Not to mention the transition from the Julian calendar to the Gregorian, which took place over many, many years - at different times in different countries - as defined by the country borders at that time...
We've even had countries that forgot to insert a leap day in certain years, causing March 1 to occur on different days altogether for a couple of years.
Time is a mess. It is, always has been, and always will be.
Messy.
I think the full list can be found here: https://www.timeanddate.com/time/time-zones-interesting.html
You can use a Bash script to get an exhaustive list from the files in /usr/share/zoneinfo/, i.e. find the timezones with non-whole-hour offsets.
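The Bash script isn't shown here, but a rough Python equivalent of the same idea, using the stdlib zoneinfo module (which reads the same tz database files), might look like this:

    # Sketch: list timezones whose current UTC offset isn't a whole number of hours.
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo, available_timezones

    now = datetime.now(timezone.utc)
    for name in sorted(available_timezones()):
        offset = now.astimezone(ZoneInfo(name)).utcoffset()
        if offset.total_seconds() % 3600:  # leftover minutes -> not a whole hour
            print(name, offset)

On a typical system this prints the usual suspects: Asia/Kolkata, Asia/Kathmandu, Australia/Eucla, Pacific/Chatham, and a handful of others.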
Why do such time zones exist?
India, for example, had two timezones before adopting a compromise: UTC+5:30.
Nepal uses UTC+5:45, partly to distinguish itself from Indian Standard Time, reinforcing national identity.
Truly, a compromise is when nobody is happy. ._\
# From Roozbeh Pournader (2025-03-18):
# ... the exact time of Iran's transition from +0400 to +0330 ... was Friday
# 1357/8/19 AP=1978-11-10. Here's a newspaper clip from the Ettela'at
# newspaper, dated 1357/8/14 AP=1978-11-05, translated from Persian
# (at https://w.wiki/DUEY):
# Following the government's decision about returning the official time
# to the previous status, the spokesperson for the Ministry of Energy
# announced today: At the hour 24 of Friday 19th of Aban (=1978-11-10),
# the country's time will be pulled back half an hour.
From https://github.com/eggert/tz/blob/main/asia#L1503. Pretty sure we can find a lot more oddities that are way worse.
An IANA timezone uniquely refers to the set of regions that not only share the same current rules and projected future rules for civil time, but also share the same history of civil time since 1970-01-01 00:00+0. In other words, this definition is more restrictive about which regions can be grouped under a single IANA timezone, because if a given region changed its civil time rules at any point since 1970 in a way that deviates from the history of civil time of the other regions, then that region can't be grouped with them.
I agree that time is a mess. And the 15-minute offsets are insane and I can't fathom why anyone is using them.
% zdump -i Europe/Warsaw | head
TZ="Europe/Warsaw"
- - +0124 LMT
1880-01-01 00 +0124 WMT
1915-08-04 23:36 +01 CET
1916-05-01 00 +02 CEST 1
1916-10-01 00 +01 CET
1917-04-16 03 +02 CEST 1
1917-09-17 02 +01 CET
1918-04-15 03 +02 CEST 1
% zdump -i Europe/Kaliningrad | head -20
TZ="Europe/Kaliningrad"
- - +0122 LMT
1893-03-31 23:38 +01 CET
1916-05-01 00 +02 CEST 1
1916-10-01 00 +01 CET
1917-04-16 03 +02 CEST 1
1917-09-17 02 +01 CET
1918-04-15 03 +02 CEST 1
1918-09-16 02 +01 CET
1940-04-01 03 +02 CEST 1
1942-11-02 02 +01 CET
1943-03-29 03 +02 CEST 1
1943-10-04 02 +01 CET
1944-04-03 03 +02 CEST 1
1944-10-02 02 +01 CET
1945-04-02 03 +02 CEST 1
1945-04-10 00 +02 EET
1945-04-29 01 +03 EEST 1
1945-10-31 23 +02 EET
%
https://en.wikipedia.org/wiki/Daylight_saving_time_in_Morocc...
Calculating Ramadan is something software packages most likely won't do. To get the visibility of the moon correct, I think you have to know the exact location - and maybe check if there is a mountain obstructing the view.
What they did instead was to "smear" it across the day, by adding 1/86400 of a second to every second on 31st Dec. 1/86400 of a second is well within the margin of error for NTP, so computers could carry on doing what they do without throwing errors.
Edit: They smeared it from the noon before the leap second to the noon after, i.e. 31st Dec 12pm - 1st Jan 12pm.
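Not Google's actual implementation, just a minimal sketch of that noon-to-noon arithmetic (the helper name and the constants, chosen for the 1998-12-31 leap second, are for illustration only):

    from datetime import datetime, timezone

    # Linear noon-to-noon smear: over 86400 smeared seconds, one extra SI second
    # is absorbed, i.e. each smeared second is 1 + 1/86400 SI seconds long.
    SMEAR_START = datetime(1998, 12, 31, 12, 0, tzinfo=timezone.utc)  # noon before
    SMEAR_LENGTH = 86400.0                                            # noon to noon

    def absorbed(t: datetime) -> float:
        """Portion of the leap second absorbed by time t, in seconds (0.0 .. 1.0).
        The smeared clock has fallen this far behind a clock that simply ignores
        the leap second; once the full second is absorbed it agrees with UTC again."""
        elapsed = (t - SMEAR_START).total_seconds()
        return min(max(elapsed / SMEAR_LENGTH, 0.0), 1.0)

    # A quarter of the way through the window, 0.25 s has been absorbed.
    print(absorbed(datetime(1998, 12, 31, 18, 0, tzinfo=timezone.utc)))  # 0.25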
http://rachelbythebay.com/w/2025/01/09/lag/
That was probably at Move Fast And Break Things Corp, instead of We Used To Be Do No Evil Inc.
- system clock drift. Google's instances have accurate timekeeping using atomic clocks in the datacenter, and leap seconds smeared over a day. For accurate duration measurements, this may matter.
- consider how the time information is consumed. For a photo sharing site the best info to keep with each photo is a location and the local date-time. Then even if some of this is missing, a New Year's Eve photo will still be close to midnight without considering its timezone or location. I had this case and opted for string representations that wouldn't automatically be adjusted. Converting it to the viewer's local time isn't useful.
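For illustration only (field names invented), the kind of record described above can be as simple as:

    # Store the wall-clock string and the place; never auto-convert either.
    photo = {
        "file": "nye.jpg",
        "taken_local": "2024-12-31 23:58",   # plain text, not a timestamp
        "location": "Sydney, Australia",
    }
    # Rendering just echoes taken_local, so a New Year's Eve photo stays
    # "just before midnight" regardless of the viewer's timezone.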
If I'm in Sydney and I accept a 4pm meeting in 3 weeks' time, say 4pm July 15 2025 in San Francisco, how should my calendar store that event's datetime and how does my calendar react to my phone changing locations/timezones?
And now try and work that out if the standard/summertime changeover happens between when the event is created in one timezone and the actual time the event is (supposed to) occur. Possibly two daylight saving time changes, if Sydney goes from winter to summer time and San Francisco goes from summer to winter time - and those changeovers don't happen at the same time, perhaps not even in the same week.
When you change locations and you have your calendar configured to show events in "the" timezone of your location, it does so. And should there be no clear timezone, it should ask you.
Very simple problem and simple solutions. There are much harder problems imho.
As you can see, the summertime change doesn't even matter here.
It's a 4pm Tuesday meeting. I want it to show as 4pm while I'm in Sydney, 4pm while I'm on a stopover in Hawaii, and correctly alert me for my 4pm meeting when I'm in San Francisco. And it probably should alert me at 4pm San Francisco time even if I'm not there, in case I missed my connecting flight in Hawaii and I want to call in at the correct time. And that last requirement conflicts with the "I want it to show as 4pm while I'm on a stopover in Hawaii" requirement, because I'm human and messy and I want the impossible without expending any effort to make it happen.
I'm pretty sure there is no "simple solution" for getting the UX right so I can add a meeting in San Francisco on my phone while I'm in Sydney, and have it "just work" without it always bugging me by asking for timezones.
Ultimately, if you don't like it, tell your calendar to show it differently.
> It's a 4pm Tuesday meeting
In one timezone, yes.
Apparently you want the times to be shown for when you will be in that timezone. But the calendar doesn't know when you will be in which timezone, and it's such a rare thing that apparently no one has made a calendar where you can say "I'll be (mentally) in this timezone from that day, and then in that timezone a week later".
So yes, your last sentence is right, because that's impossible. That's different from "hard".
Confusion about special and general relativity accounts for almost none of the problems that programmers encounter in practice. If that's your use case, then fine, time is special and tricky.
The most common issue is a failure to separate model vs. view concepts. E.g. timestamps are a model, but local time, day of the week, and leap seconds are all view concepts. The second most common issue is thinking that UTC is suitable to use as a model, instead of the much more reasonable TAI64. After that it's probably the difference between scheduling requests vs. logging what happened. "The meeting is scheduled next Wednesday at 9am local time" vs. "the meeting happened in this time span". One is a fact about the past, and the other is just scheduling criteria. It could also be something complicated like "every other Wednesday", or "every Wednesday on an even-numbered day of the month". Or "we can work on this task once these 2 machines are available", "this process will run in 2 time slots from now", etc.
(note that for short intervals, when a single second actually matters, you should use neither TAI nor UTC; use the monotonic timer provided by your OS)
This shouldn't affect code complexity at all. None of this logic belongs in the application code. You take your timestamp stored as state and convert it to a presentation format by calling a library in both cases. Converting timezones yourself by adding or subtracting hours is asking for trouble.
So that's why I always advocate for UTC, ideally with smeared leap seconds (although this point is kinda moot now). The 0.00000001% of people who actually care about leap seconds can use the library to get TAI.
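As a minimal sketch of that split (the zone names and values here are arbitrary): the model keeps a UTC timestamp, and a library does all the zone math at display time.

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    stored = datetime(2025, 7, 15, 23, 0, tzinfo=timezone.utc)  # the model: plain UTC

    def present(ts: datetime, tz_name: str) -> str:
        # the view: no manual hour arithmetic, the tz database handles DST/offsets
        return ts.astimezone(ZoneInfo(tz_name)).strftime("%Y-%m-%d %H:%M %Z")

    print(present(stored, "America/Los_Angeles"))  # 2025-07-15 16:00 PDT
    print(present(stored, "Australia/Sydney"))     # 2025-07-16 09:00 AEST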
Heck you can’t even model “Oct 15 2030 at 2pm” as a timestamp for the same reason if it is an appointment involving two humans in the same tz.
Somewhere you need to store the rule using local time concepts.
It's worth repeating over and over: "Only use local time if the actual moment event happens depends on the timezone, like during future calendar entries. Every other time in the system should be a utc timestamp, converted to local time string at the very last presentation layer"
> Somewhere you need to store the rule using local time concepts.
The scheduling rules can be in terms of whatever you want and maybe that's local time or a timezone or soonest free block, but they are being used to solve for timestamps in the actual schedule. You run into problems if the scheduler output is in terms of any of those concepts.
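A rough sketch of that separation (the record layout is invented for illustration): the stored rule is in local-time terms, and the scheduler solves it into a concrete timestamp against the current tz rules.

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # The rule: "2pm on Oct 15 2030, New York local time" -- local-time concepts.
    rule = {"local": "2030-10-15 14:00", "zone": "America/New_York"}

    def solve(rule: dict) -> float:
        wall = datetime.fromisoformat(rule["local"])
        aware = wall.replace(tzinfo=ZoneInfo(rule["zone"]))
        return aware.timestamp()  # the scheduler output: a concrete Unix timestamp

    # If the zone's rules change before 2030, re-run solve() under the new tzdata
    # instead of trusting a timestamp computed years in advance.
    print(solve(rule))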
Another fun aspect of this topic is how people will tacitly jump around the definition of "day" being when the sun is up without confronting that that is, itself, not a given. And, worse, is completely thrown out if you try to abolish time zones.
I know you had to limit the length of the post, but time is an interest of mine, so here's a couple more points you may find interesting:
UTC is not an acronym. The story I heard was that the English acronym would be "CUT" (the name is "coordinated universal time") and the French complained, the French acronym would be "TUC" and the English-speaking committee members complained, so they settled for something that wasn't pronounceable in either. (FYI, "ISO" isn't an acronym either!)
Leap seconds caused such havoc (especially in data centers) that they are being phased out: none have been needed since the end of 2016, and the plan is to abandon them entirely by 2035. (What will happen after that is anyone's guess.) But for now, you can rest easy and ignore them.
I have a short list of time (and NTP) related links at <https://wpollock.com/Cts2322.htm#NTP>.
I see someone else is a Vernor Vinge fan.
But it's kind of a wild choice for an epoch, when you're very likely to be interfacing with systems whose Epoch starts approximately five months later.
I know they exist, but I would say those are niche. And even for applications where you can't just use UTC for everything and be done with it, the vast majority of timestamps will be UTC. You'll have one or a few special cases where you need to do more fancy stuff, and for everything else you just use utc.
> I know they exist, but I would say those are niche.
I firmly disagree with this, but I think that is because I think timestamps are very different than dates. Two examples my team runs into frequently that I think are very common:
1. Storing the time of an event at a physical location. These are not timestamps and I would never want to convert them to a different time zone. We had Google Calendar trying to be smart and converting it to the user's local time because it was stored as a timestamp, but it is not a timestamp. I don't care where the user lives, they will need to show up Jan 2nd at 3pm. Period. I hate when tools try to auto-convert time zones.
2. Storing "pure dates" (e.g. birthdays). The database we use does not allow this and we have to store birthdates in UTC. This is an abomination. I've seen so many bugs where our date libraries "helpfully" convert the time zone of birthdays and put them a day before they actually are.
Storing UTC may solve almost all timestamp problems, but timestamp problems are a pretty narrow slice of date and time related bugs.
And the reason I feel the need to say this is that most systems I've worked on don't do that. They use local time for things that have no business being local time.
UTC should be the default solution, and local datetime usage should be a solution in the few situations where it's needed.
And yeah, dateonly is nice. If the db doesn't support it you can just store it as a string and serialize/deserialize it to a DateOnly type in your software.
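A tiny sketch of that workaround: keep the value as a plain "YYYY-MM-DD" string at rest and only turn it into a date object in code, so nothing can shift it across a timezone boundary.

    from datetime import date

    def from_db(stored: str) -> date:
        return date.fromisoformat(stored)   # "1990-01-02" -> date(1990, 1, 2)

    def to_db(d: date) -> str:
        return d.isoformat()                # date(1990, 1, 2) -> "1990-01-02"

    birthday = from_db("1990-01-02")
    assert to_db(birthday) == "1990-01-02"  # round-trips; no timezone involved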
For internal timestamps, such as for ordering events in a database, UTC or something similar is nice. But the point then is that those values are not really meaningful or important in the analog world.
Basically if you want to preserve the original input then obviously you won't change it. If you just want to record an instant in time you use UTC.
But I hate how when I stack my yearly weather charts, every four years either the graph is off by one day so it is 1/366th narrower and the month delimiters don't line up perfectly, or I have to duplicate Feb 28th so there is no discontinuity in the lines. Still not sure how to represent that, but it sure bugs me.
Well, how do we know what timezone "2026-06-19 07:00" is in, to be able to know that the time rules for that timezone have changed, if we do not store the timezone?
Additionally, how do we really "detect that the time rules for that timezone have changed"? We can stay informed, sure, but is there a way to automate this?
The website uses their system timezone America/Los_Angeles. It would be better to store that as well, but it's probably not much of an issue if the website is dedicated to meetings in that specific locale.
> Additionally, how do we really "detect that the time rules for that timezone have changed"? We can stay informed, sure, but is there a way to automate this?
My first attempt would be to diff the data between versions. If a diff of 2025b against 2025a has added/removed lines which include America/Los_Angeles, you recompute the timestamps. This, of course, requires that the library support multiple timezone database versions at once.
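As a sketch of the recompute step (assuming events are stored as local wall time plus an IANA zone name, alongside the previously derived UTC instant): after a tzdata update that touches the zone, re-derive and compare.

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    event = {
        "local": "2026-06-19 07:00",
        "zone": "America/Los_Angeles",
        "utc": "2026-06-19T14:00:00+00:00",   # derived under the old tzdata
    }

    def rederive_utc(local: str, zone: str) -> datetime:
        wall = datetime.fromisoformat(local).replace(tzinfo=ZoneInfo(zone))
        return wall.astimezone(timezone.utc)

    new_utc = rederive_utc(event["local"], event["zone"])
    if new_utc != datetime.fromisoformat(event["utc"]):
        event["utc"] = new_utc.isoformat()    # rules changed: fix the stored instant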
Umm what? In Unix time some values span two seconds, which is the crux of the problem. In UTC every second is a proper nice SI second. In Unix time the value increments every one or two SI seconds.
https://en.wikipedia.org/wiki/Unix_time#Leap_seconds
From there you can clearly see that e.g. Unix time 915148800 lasted two seconds
We can make an analogy to leap days:
- UTC is like the Gregorian calendar: on leap years it goes Feb28-Feb29-Mar1 (23:59:59-23:59:60-00:00:00)
- TAI would be just always going Feb28-Mar1 (23:59:59-00:00:00), ignoring leap years
- Unix time would be like going Feb28-Mar1-Mar1 (23:59:59-00:00:00-00:00:00) on leap years, repeating the date
From this it should be pretty obvious why I consider Unix time so bonkers.
So in fact, Unix seconds can be longer than intuitively expected. Which also means that two UTC timestamps with different seconds can map to the same Unix timestamp.
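A worked illustration of the repeat, using the 1998-12-31 leap second (Python's datetime, like POSIX time, simply ignores leap seconds):

    from datetime import datetime, timezone

    before = datetime(1998, 12, 31, 23, 59, 59, tzinfo=timezone.utc).timestamp()
    after  = datetime(1999, 1, 1, 0, 0, 0, tzinfo=timezone.utc).timestamp()

    print(int(before), int(after))  # 915148799 915148800
    # Two SI seconds elapsed between these instants, but Unix time advanced by 1:
    # the leap second 23:59:60 UTC has no value of its own and shares 915148800.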
My guess is that with the increasing dependency on digital systems for our lives the edge-cases where these rules aren't properly updated cause increased amounts of pain "for no good reason".
In Brazil we recently changed our DST rules, it was around 2017/2018. It caused a lot of confusion. I was working with a system where these changes were really important, so I was aware of this change ahead of time. But there are a lot of systems running without too much human intervention, and they are mostly forgotten until someone notices a problem.
The standard name for a duration in physics is a "period", or 'uppercase T' ('lowercase t' being a point in time), which curiously enough is the inverse of a frequency (or rather, the frequency is the inverse of the period). A period can also be thought of as an interval [t0, t1], i.e. the set of instants t with t0 <= t <= t1.
> The concept of "absolute time" (or "physical/universal time") refers to these instants, which are unique and precisely represent moments in time, irrespective of concepts like calendars and timezones.
Funnily enough, you mean the opposite. Absolute time physically does not exist, just like absolute distance: there is no kilometer 0. Every measurement is relative to another; in the case of time you might measure relative to the birth of (our Lord and saviour) Jesus Christ. But you never have time "irrespective" of something else, and if you do, you are probably referring to a period with an implicit origin. For example, if I say a length of 3m, I mean an object whose distance from one end to the other is 3m. And if I say 4 minutes of a song, I mean that the end is 4 minutes after the start, in the same way that a direction might be represented by a 2D vector [1,1] only because we are assuming a relationship to [0,0].
That said, it's clear that you have a lot of knowledge about calendars from a practical software experience of implementing time features in global products, I'm just explaining time from the completely different framework of classical physics, which is of course of little use when trying to figure out whether 6PM in Buenos Aires and 1 PM in 6 months in California will be the same time.
Software engineers understandably don't like that, so Unix time handles it instead by "going backwards" and repeating the final second. That way every minute is 60 seconds long, every day is 86400, and you're only at risk of a crazy consistency bug about once every year and a half. But most databases do it differently, using smearing. Many databases use different smearing windows from one another (24 hours vs 20 for instance). Some rarer systems instead "stop the clock" during a leap second.
That's 4 different ways to handle a leap second, but much documentation will use terms like "UTC" or "Unix time" interchangeably to describe all 4 and cause confusion. For example, "mandating UTC for the server side" almost never happens. You're probably mandating Unix time, or smeared UTC.
If you care about sub-second differences, you likely run your own time infra (like Google Spanner), and your systems are so complex already that the time server is just a trivial blip.
If you are communicating across org boundaries, I've never seen sub-second difference in absolute time matter.
It makes a lot of sense until you realize what we're doing. We're just turning UTC into a shittier version of TAI. After 2035, they will forevermore have a constant offset, but UTC will keep its historical discontinuities. Why not just switch to TAI, which already exists, instead of destroying UTC to make a more-or-less redundant version of TAI?
I'm surprised anything works at all just from what I know.
TAI provides a time coordinate generated by taking the weighted average of the proper times of 450 world lines tracked by atomic clocks. Like any other time coordinate, it provides a temporal orientation but no time coordinate could be described as "universal" or "linear" in general relativity. It would be a good approximation to proper time experienced by most terrestrial observers.
Note that general relativity doesn't add much over special relativity here (the different atomic clocks will have different velocities and accelerations due to altitude, and so have relative differences in proper time along their world lines). If you already have a sufficiently general notion of spacetime coordinates, the additional curvature from general relativity over Minkowski space is simply an additional effect changing the relation between the coordinate time and proper time.
This article explains it really well. The part about leap seconds especially got me. We literally have to smear time to keep servers from crashing. That’s kind of insane.
Everyone should use TAI as their fundamental representation. TAI has no leap seconds. It's way easier to convert from TAI to UTC than vice versa. You can still easily present all your timestamps in UTC when printed as a string.
NTP servers are generally synced up to GPS signals, which already use a version of TAI for their time signals. So an NTP server will take a perfectly good TAI time signal and do a smearing conversion to something that looks more like UTC (but isn't, because a true UTC clock would occasionally have a 61-second minute instead of smearing). Then someone never fails to freak out about the leap seconds, because we have this oversimplified time abstraction that encourages you to ignore them. And instead of realizing they made a mistake in recommending UTC as the fundamental representation, BIPM is doubling down and is about to eliminate leap seconds entirely, so UTC will become just a worse version of TAI (because it will still carry its historical discontinuities with most software systems, but also drift away from solar time). I'm kinda pissed about it.
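To make the "easier direction" concrete, here's a minimal sketch of TAI -> UTC: subtract the TAI-UTC offset in effect at that instant. The table is deliberately truncated to its two most recent entries (since 2017-01-01 the offset has been a constant 37 s), and naive datetimes stand in for plain clock readings.

    from datetime import datetime, timedelta

    # (TAI clock reading at which the offset took effect, TAI-UTC in seconds)
    TAI_MINUS_UTC = [
        (datetime(2015, 7, 1, 0, 0, 36), 36),
        (datetime(2017, 1, 1, 0, 0, 37), 37),
    ]

    def tai_to_utc(tai: datetime) -> datetime:
        offset = max(off for start, off in TAI_MINUS_UTC if tai >= start)
        return tai - timedelta(seconds=offset)

    print(tai_to_utc(datetime(2025, 6, 1, 12, 0, 37)))  # 2025-06-01 12:00:00 (UTC)
    # The reverse direction is the fiddly one: near a leap second, UTC needs the
    # 23:59:60 representation, which plain datetime types can't even express.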
Where practical I schedule them around 12:00 (but I'm sure one day I'll get stung by some odd country that chooses to implement its daylight savings changeover in the middle of the day).
https://gist.github.com/timvisee/fcda9bbdff88d45cc9061606b4b...
In a nutshell if you believe anything about time, you're wrong, there is always an exception, and an exception to the exception. And then Doc Brown runs you over with the Delorean.
Instead I mostly use time for durations and for happens-before relationships. I still use Unix flavor timestamps, but if I can I ensure monotonicity (in case of backward jumps) and never trust timestamps from untrusted sources (usually: another node on the network). It often makes more sense to record the time a message was received than trusting the sender.
That said, I am fortunate to not have to deal with complicated happens-before relationships in distributed computing. I recall reading the Spanner paper for the first time and being amazed how they handled time windows.
Things like timestamps are used to track things like file creation, transaction processing, and other digital events.
As computers and networks have become increasingly fast, the accuracy of the timestamps becomes more and more critical.
While the average human doesn't care whether a file was created at a time calculated down to the nanosecond, it is often important to know if it was created before or after the last backup snapshot.
I'm bookmarking this article to hand out to new developers.
"Wanna grab lunch at 1,748,718,000 seconds from the Unix epoch?"
I'm totally going to start doing that now.
Ooh, this is a really interesting topic!
Okay, so the first thing to keep in mind is that there are three very important cyclical processes that play a fundamental role in human timekeeping and have done so since well before anything we could detect archaeologically: the daily solar cycle, the lunar cycle (whence the month), and the solar year. All of these are measurable with mark 1 human eyeballs and nothing more technologically advanced than a marking stick.
For most of human history, the fundamental unit of time from which all other time units are defined is the day. Even in the SI system, a second wasn't redefined to something more fundamental than the Earth's kinematics until about 60 years ago. For several cultures, the daylight and the nighttime hours are subdivided into a fixed number of periods, which means that the length of the local equivalent of 'hour' varied depending on the day of the year.
Now calendars specifically refer to the systems for counting multiple days, and they break down into three main categories: lunar calendars, which look only at the lunar cycle and don't care about aligning with the solar year; lunisolar calendars, which insert leap months to keep the lunar cycle vaguely aligned with the solar year (since a year is about 12.4 lunations long); and solar calendars, which don't try to align the lunations (although you usually still end up with something akin to the approximate length of a lunation as subdivisions). Most calendars are actually lunisolar calendars, probably because lunations are relatively easy to calibrate (when you can go outside and see the first hint of a new moon, you start the new month) but one of the purposes of the calendar is to also keep track of seasons for planting, so some degree of solar alignment is necessary.
If you're following the history of the Western calendrical tradition, the antecedent of the Gregorian calendar is the Julian calendar, which was promulgated by Julius Caesar as an adaptation of the Egyptian solar calendar for the Romans, after a series of civil wars caused the officials to neglect the addition of requisite leap months. In a hilarious historical example of fencepost errors, the number of years between leap years was confused and his successor Augustus had to actually fix the calendar to have a leap year every 4th year instead of every 3rd year, but small details. I should also point out that, while the Julian calendar found wide purchase in Christendom, that didn't mean it was handled consistently: the day the year started varied from country to country, with some countries preferring Christmas as New Year's Day and others preferring a date as late as Easter itself, which isn't even a fixed day every year. The standardization of January 1 as New Year's Day isn't really universal until countries start adopting the Gregorian calendar (and the transition between the Julian and Gregorian calendars is not smooth at all).
Counting years is even more diverse and, quite frankly, annoying. The most common year-numbering scheme is regnal numbering: it's the 10th year of King Such-and-Such's reign. Putting together an absolute chronology in such a situation requires accurate lists of kings and such, which are often lacking; there are essentially perennial conflicts in Ancient Near East studies over how to map those dates to ones we'd be more comfortable with. If you think that's too orderly, you could just name years after significant events (this is essentially how Winter Counts work in Native American cultures); the Roman consular system works on that basis. If you're lucky, sometimes people also had an absolute epoch-based year number, like modern people largely agree that it's the year 2025 (or Romans using 'AUC', dating from the mythical founding of Rome), but this tends not to be the dominant mode of year numbering for most of recorded human history.