It's an amazing time to be alive. While not this precise, you can have atomic cesium beam clocks of your own for a few thousand dollars each, and some elbow grease.
Assuming my math is correct, it's already affected by nearby human scale masses, for certain values of "near".
However, by that logic, an object sitting in a cavity at the center of the Earth should experience no more dilation than an object outside the Earth's potential well, because the gravitational forces (the curvature gradient) cancel out, so the net force is zero. But that isn't the case according to the same sources; for example, Wikipedia says: "Relative to Earth's age in billions of years, Earth's core is in effect 2.5 years younger than its surface."
Something's not right about how we verbalize this story about gravity
To an observer at infinity, a clock at the core of the Earth will tick slower than a clock on the surface of the Earth because the "core clock" is sitting in a more curved region of space, and that's it.
The difference between the clock on the surface of the Earth and the clock at the core is that the surface clock can't follow the "straight lines" (geodesics) in that curved space. So it experiences acceleration due to the force of inertia. And the thing preventing that movement is the repulsive force between atoms that make up the bulk of the Earth.
If this repulsive force magically disappeared, the Earth's atoms would immediately start moving along straight lines, on trajectories that would lead them all to a point at the center of the Earth.
To add: the force of inertia due to moving along curved lines instead of geodesics depends on the "steepness" of the curved space, which decreases as you approach the center of the Earth. So you get essentially the same result as with classic Newtonian gravity, but through an entirely different path.
no net force, but net potential energy - thus gravitational dilation
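The 2.5-year figure can be sanity-checked from the potential difference. A minimal sketch, assuming a uniform-density Earth (the real Earth has a denser core, which is why Wikipedia's 2.5 years comes out larger than this estimate):

```python
# Rough estimate of the core-vs-surface time dilation of Earth.
# Assumes a uniform-density sphere; the real figure (~2.5 years per
# Wikipedia) is larger because Earth's core is denser than average.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth mass, kg
R = 6.371e6          # Earth radius, m
c = 2.998e8          # speed of light, m/s
age_years = 4.54e9   # age of Earth, years

# Newtonian potential: -GM/R at the surface, -3GM/(2R) at the center
# of a uniform sphere.
phi_surface = -G * M / R
phi_center = -1.5 * G * M / R

# Weak-field dilation: fractional rate difference ~ delta(phi) / c^2.
frac = (phi_surface - phi_center) / c**2   # center ticks slower by this

age_gap_years = frac * age_years
print(f"fractional rate difference: {frac:.2e}")
print(f"core younger by roughly {age_gap_years:.1f} years")
```

This lands in the right ballpark (order of a year or two over Earth's age), confirming that it's the potential, not the local force, that sets the dilation.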
https://apps.dtic.mil/sti/pdfs/AD1012150.pdf
Though, that 1989 paper concludes that because gravimeters would need a sensitivity of at least one part in 10^13 for practical usage, far beyond what was capable at the time, "[t]he concept of detecting submarines by means of detecting gravitational anomalies they produce, should be abandoned."
No.
If not, it'd make for a pretty cool plot device if done well.
How hard or expensive would it be for a reasonably equipped lab to build their own optical clock though? I see there are optical clocks the size of few rack units on the market for a rather hefty price, are the materials needed that expensive or is it just the expertise?
Oh and to know if it's any good you have to either build two (ideally more) of them to compare against each other (ideally using different approaches so their errors are less correlated), or have access to a clock better than the one you're building to compare to. So you can rarely get away with building just one if you want to know if you've succeeded.
Source: I work on the software for these portable optical clocks: https://phys.org/news/2025-07-quantum-clocks-accuracy-curren...
Once optical comb sources are commoditized to the extent that solid-state lasers are now, a lot of fun stuff will become possible.
Re the commoditization: Part of the problem is that customers, especially the scientific ones, don't want "commodity" frequency combs. Nearly every comb we sell is tailored to the specific customer in one way or another.
Industrial customers are becoming more and more interested in frequency combs. I guess this will be the clientele that values off-the-shelf products, eventually paving the way for commoditization.
A satellite, ACES, was recently launched that uses atomic clocks to accurately measure Earth's gravity field.
On second thought, you need a base station on the ground to tell you its time for comparison anyway, so if that base station is nearby the density thing should mostly work itself out
In what amount of time? Not instantly, right?
[0] https://sci-hub.se/https://doi.org/10.1126/science.1192720 ("Optical Clocks and Relativity" (2010))
But you would need a more precise characterization of the clock to answer this.
There might be significant noise on individual measurements, meaning that you need to take multiples to get precise enough (see https://en.wikipedia.org/wiki/Allan_variance).
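The Allan-variance idea above can be sketched numerically: average the fractional-frequency data over longer and longer intervals and watch the deviation shrink. A minimal sketch with made-up white frequency noise (all numbers illustrative):

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    averaged in blocks of m samples."""
    n = len(y) // m
    blocks = y[:n * m].reshape(n, m).mean(axis=1)  # block averages
    diffs = np.diff(blocks)
    avar = 0.5 * np.mean(diffs**2)                 # Allan variance
    return np.sqrt(avar)

# White frequency noise averages down as 1/sqrt(tau):
rng = np.random.default_rng(0)
y = rng.normal(0, 1e-15, 100_000)   # simulated fractional-frequency noise
adev_1 = allan_deviation(y, 1)
adev_100 = allan_deviation(y, 100)
print(adev_1, adev_100)             # 100x averaging -> ~10x smaller deviation
```

This is why "take multiple measurements" buys you precision: for white noise, 100 times the averaging time gives roughly a 10x lower Allan deviation.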
Edit: If you just have the clock output in ticks, you also need enough time to elapse to get a deviation of at least one tick between the two clocks you are comparing. This is a big limitation, because at a clock rate of 1 GHz you are still waiting for something like 30 years (!!). (In practice you could probably cheat a bit to get around this limit.)
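The ~30-year figure checks out. Assuming a 1 GHz tick rate and a fractional rate difference of one part in 10^18:

```python
# Time needed for two 1 GHz clocks differing by one part in 1e18
# to drift apart by a single tick (one clock period).
tick_period = 1e-9         # s, one cycle at 1 GHz
fractional_offset = 1e-18  # assumed fractional rate difference

t = tick_period / fractional_offset   # seconds until one tick of drift
years = t / (365.25 * 24 * 3600)
print(f"{years:.1f} years")           # ~31.7 years
```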
In practice with this level of precision you are usually measuring the relative phase of the two clocks, which allows substantially greater resolution than just looking at whole cycles, which is 'cheating' to some degree, I guess. (The limit is usually how noisy your phase measurement is)
(To give some intuition, imagine comparing two pendulum clocks. If you take a series of pictures of the pendulums next to each other, you can gauge whether one of them is running fast relative to the other, and by how much, without waiting for one to complete a full swing more than the other.)
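That phase-comparison trick can be sketched as a linear fit: sample the phase difference over time and fit its slope, long before either clock gains a whole cycle. A minimal sketch with invented numbers (offset, noise level, and sample rate are all assumptions):

```python
import numpy as np

# Two oscillators with a tiny fractional frequency offset: rather than
# wait for a whole-cycle discrepancy, sample the phase difference
# repeatedly and fit its slope.
f0 = 1e9                  # nominal frequency, Hz (assumed)
frac_offset = 1e-12       # true fractional offset (assumed)
t = np.arange(0.0, 10.0, 0.1)                       # 10 s of samples

phase_diff = 2 * np.pi * f0 * frac_offset * t       # ideal phase, rad
rng = np.random.default_rng(1)
measured = phase_diff + rng.normal(0, 1e-3, t.size) # phase-meter noise

slope = np.polyfit(t, measured, 1)[0]               # rad/s
est_frac = slope / (2 * np.pi * f0)
print(est_frac)   # recovers ~1e-12 from ~0.06 rad of total phase
```

The limit is indeed the noise of the phase measurement, not the cycle count: here the total accumulated phase is a small fraction of one cycle, yet the offset is recovered.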
This improves the clock’s stability, reducing the time required to measure down to the 19th decimal place from three weeks to a day and a half.
So no, not instantly. It takes a longer measurement to be more confident.
So then the question has to be asked, does the effect really happen instantly? Or do the same mechanisms that impose an inverse relationship between bandwidth and SNR mean that, in fact, it doesn't happen instantly at all?
Nevertheless, in order to measure a frequency difference between two optical clocks you do not need to count their signals. The optical signals can be mixed in a non-linear optical medium, which will provide a signal whose frequency is equal to the difference between the input frequencies.
That signal might have a frequency no greater than 1 GHz, so it might be easy to count with a digital counter.
Of course, the smaller the frequency difference is, the longer must be the time used for counting, to get enough significant digits.
The laser used in this clock has a frequency around 200 THz (like for optical fiber lasers), i.e. about 2E14 Hz. This choice of frequency allows the use of standard optical fibers to compare the frequencies of different optical clocks, even when they are located at great distances.
Mixing the light beams of 2 such lasers, in the case of a 1E-17 frequency difference would give a difference signal with a period of many minutes, which might need to be counted for several days to give an acceptable precision. The time can be reduced by a small factor selecting some harmonic, but it would still be of some days.
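The beat-note arithmetic above is quick to verify for the stated numbers (200 THz carrier, one part in 10^17 difference):

```python
# Beat note between two ~200 THz lasers differing by one part in 1e17.
f_laser = 2e14      # Hz, telecom-band optical frequency
frac_diff = 1e-17

f_beat = f_laser * frac_diff          # difference frequency, Hz
period_minutes = (1 / f_beat) / 60
print(f"beat: {f_beat:.0e} Hz, period: {period_minutes:.1f} minutes")
```

A 2 mHz beat has a period of over eight minutes, so counting enough cycles for several significant digits does indeed take days.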
Let's imagine that there is a huge amount of time dilation (we live on the surface of a neutron star or something). By climbing a bit, we experience 1.1 seconds for every 1.0 seconds experienced by someone who stayed down.
We have a clock whose smallest tick is a millisecond. But climbing up, coming back down, and comparing the number of ticks won't let us conclude anything after a single millisecond. We must spend at least 11 milliseconds up to get a noticeable 11-to-10 millisecond difference.
Now, if the dilation were 1.01 seconds vs 1.00, we would need to spend at least 101 milliseconds up, to get a minimal comparison of 101 vs 100 milliseconds.
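The rule in the thought experiment above generalizes: the required dwell time is one tick divided by the fractional rate difference. A minimal sketch:

```python
def min_dwell(rate_up, rate_down):
    """Ticks that must elapse on the slower (lower) clock before the
    faster (upper) clock has registered one whole extra tick."""
    return rate_down / (rate_up - rate_down)

print(round(min_dwell(1.1, 1.0)))    # 10  -> 11 up-ticks vs 10 down-ticks
print(round(min_dwell(1.01, 1.0)))   # 100 -> 101 vs 100
```

So the smaller the dilation, the longer you must stay up; real terrestrial dilation (parts in 1e16 per meter) is why clock resolution at the 19th decimal place matters.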
That idea is the premise of https://en.wikipedia.org/wiki/Incandescence_(novel)
I'm not above buying a toy to look at logs and geek out over it, but I can't justify spending several grand for it.
Just something very cool about the idea of a hyper-accurate clock living in my house. I don’t know what I would do with it, just that it would be neat.
If I ever become a billionaire or something, I will absolutely buy one. Sadly I don’t think that’s likely to happen any time soon.
How can anything … you know what? Never mind. No matter what answer anyone provides, I won’t understand.
Yes
> How can anything …
So your cesium counting device will faithfully provide such a count, and depending on its altitude the count will accumulate at a different rate.
Both clocks each experience time at the usual one second per second, but gravity dilates time between them.
Locally, a second is always a second, but there is no absolute second valid everywhere, just as there is no universal "now".
I think this new clock is simply able to generate more precisely spaced ticks than those of a traditional Cs clock. Less jitter and variation in the timing of those ticks. Similar to how a one-hour water clock or sand timer's runtime will vary between "transitions", but a one-hour quartz stopwatch timer is much more regular. I could keep going, but I'm already out on a limb so I'll stop before my own uncertainty rises too much.
(Edit: I read the article. I don't think my words above are correct.)
I propose calling it TIGO (Time Interferometer Gravitational-Wave Observatory) ;-)
I think.
I briefly worked at NOAA, on this same campus, and I loved walking around NIST. Such a cool building. The entire campus is at risk -> https://www.cpr.org/2025/07/01/proposed-noaa-budget-would-cl...
Is there a meaningful pattern or logic to explain the combination of potential closures alongside new construction?
NIST will reply with a key number and a key value. The reply will be by US mail only, e-mail will never be used.
The office that normally receives US mail and FAX messages currently has limited access, which may result in significant delays in processing requests
https://www.nist.gov/pml/time-and-frequency-division/time-se...
(things you discover when you implement fedramp)
From the link:
> The service will be provided at no charge, and user keys may be used to connect to any of the servers whose addresses are listed below. Additional hardware will be added in the future if the demand for the service is sufficiently great to warrant it.
Making it clear that they are going to shoulder the extra costs.
if you really want it, there are plenty of services that provide you with virtual mailbox in usa
New atomic fountain clock joins group that keeps the world on time (nist.gov) | 118 points | 76 days ago | 33 comments | https://news.ycombinator.com/item?id=43831792
Major leap for nuclear clock paves way for ultraprecise timekeeping (nist.gov) | 12 points | 7 months ago | 10 comments | https://news.ycombinator.com/item?id=42362215
I left a comment on the first that summarizes the second one, which describes how they're working on a new type of atomic "nuclear" clock based on the atomic nucleus instead of electron orbitals. It doesn't mention the accuracy, I wonder how it would compare to this "ion" clock.
You can build two and see how much they shift relative to each other. That gives you precision.
So what's the point of a clock if you just define it to be correct? Again, having two clocks is what makes it interesting. Some people have commented that according to general relativity there will be measurable time dilation, but there are other fun experiments, e.g.
- Measure shift of fundamental "constants": If you have two clocks that use different elements, the frequency ratio can be related to some things we thought were constants in the universe. If they shift, they aren't constant.
- Look for preferred directions in space: does one clock give a different reading if you turn it on its side?
- Some theories predict that dark matter might induce a frequency shift in these clocks. Put the clocks far apart and look for spatial modulations in the dark matter density.
- Measure anything else that had to be tweaked to make the clock stable. This includes the magnetic field, for example, so the clock is also a really sensitive magnetometer.
Consider the construction of a precision clock, then build two (or more) of them. Take a pair, set them to the same initial time, and then let them count time (same height, etc.).
Ideally, a pair of perfect clocks would display the same time forever. In practice you see the clocks slowly walk off with respect to each other (a systematic component plus a random component); a reasonable first-order approximation is to treat the difference in displayed time as a random walk. A collection of a large number of clocks would behave like a collection of random-walk instances, diffusing in delta-time space.
A poor clock construction would diffuse more quickly, and a better clock construction would diffuse more slowly.
One doesn't need a perfect clock reference to measure the random walk of a clock type / construction. Just compare 2 identically constructed and used ones.
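That pairwise-comparison idea can be sketched in a few lines: the difference of two independent random walks is itself a random walk with twice the per-step variance, so comparing two identical clocks reveals the per-clock diffusion with no perfect reference. A minimal simulation (step size is made up):

```python
import numpy as np

# Two identical clocks whose time error random-walks.
rng = np.random.default_rng(2)
steps = 10_000
sigma_step = 1e-12     # per-step time-error increment, s (assumed)

clock_a = np.cumsum(rng.normal(0, sigma_step, steps))
clock_b = np.cumsum(rng.normal(0, sigma_step, steps))
diff = clock_a - clock_b                  # all we can actually observe

# Per-step variance of the difference is 2 * sigma_step^2, so halving
# it attributes the diffusion equally to each clock.
per_clock_var = np.var(np.diff(diff)) / 2
print(np.sqrt(per_clock_var))             # recovers ~sigma_step
```

With more than two clock types, the "three-cornered hat" generalization even lets you separate the contributions of non-identical clocks.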
For example even very small magnetic fields will change the clock speed, thermal changes will as well (so will lots of other things). So you try to shield from that, and keep the temperature stable (and of course you need to figure out every other things that could add noise).
Then you measure all the influences you are simply unable to control, and calculate what effect they have on the clock; that's your accuracy number.
One way to directly measure that, instead of calculate it, is to have two identical clocks, synchronize them, and let them run. Then compare them, and see if they differ. (Watch out for relativity messing with time.)
For example, every electron is exactly the same as every other electron, they do not vary in the slightest. You utilize properties like that to make exact references to time.
it's a human construct so whatever is agreed upon is correct.
If "all the clocks are wrong" it doesn't matter as long as they are consistent. (in the case of atomic clocks, frequency of energy transitions within atoms)
NTP servers ultimately get their time from an average of many atomic clocks, which is then distributed to all phones and computers.
If the constants from these atomic clocks "are a little bit wrong" it does not matter (for most human activities)
That's why we average them and distribute the average.
For physics related research, this new clock being more precise does have use, but for pretty much everything else, whatever constant we have is good enough as long as it's consistently used.
Back in the day it was someone just running around with a pocket watch giving everyone the time from the clock tower which was calibrated from a sundial and that was good enough.
Replace the sun's shadow with electron transitions and the timekeepers with ntp servers and that's what you have today.
They are used together with a laser. A so-called frequency comb acts as a frequency divider between the hundreds of THz of the optical signal and the hundreds of MHz or few GHz of a clock signal that can be counted with a digital counter. That digital counter could be used as a date-and-time clock, except that you would need several such optical clocks to guard against downtime: present optical clocks cannot operate for very long before needing a reset, because the trapped ion gets lost from the trap or the neutral atoms get lost from the optical lattice. Therefore you need many of them to implement a continuous time scale.
The laser is the component that provides a continuous signal. In this case it produces infrared light in the same band as the lasers used for optical-fiber communications, and it is based on glass doped with erbium and ytterbium. The laser frequency is adjusted to match a resonance frequency of the trapped ion (in this case a submultiple of it, because the frequency of the transition used in the aluminum ion is very high, in the ultraviolet). For very short time intervals, when the laser cannot follow the reference frequency (which must be filtered of noise), its frequency stability is determined by a resonant cavity made of silicon (which is transparent to the laser's infrared light), cooled to a very low temperature to improve its quality factor.
So this is similar to the behavior of the clock of a computer, which for long time intervals has the stability of the clocks used by the NTP servers used by it for synchronization, but for short time intervals it has the stability of its internal quartz oscillator.
This new optical atomic clock has the lowest ever uncertainty for the value of its reference frequency, but being a trapped single ion clock it has a higher noise than the clocks based on lattices of neutral atoms (because those can use thousands of atoms instead of one ion), so its output signal must be averaged over long times (e.g. many days) in order to reach the advertised accuracy.
For short averaging times, e.g. of one second, its accuracy is about a thousand times worse than the best attainable (however, its best accuracy is so high that even when averaged for a few seconds it is about as good as the best microwave clocks based on cesium or hydrogen).
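The "many days" of averaging follows from the usual 1/sqrt(tau) scaling for white frequency noise. A back-of-envelope sketch, with an illustrative (not NIST's published) 1-second instability:

```python
# If instability averages down as sigma(tau) = sigma_1s / sqrt(tau),
# the averaging time to reach a target scales as the square of the ratio.
# sigma_1s below is illustrative, not a published figure.

def averaging_time(sigma_1s, target):
    """Seconds of averaging needed, assuming 1/sqrt(tau) scaling."""
    return (sigma_1s / target) ** 2

tau = averaging_time(sigma_1s=1e-15, target=5.5e-19)
print(f"{tau / 86400:.0f} days")
```

A thousand-fold gap between short-term noise and target uncertainty turns into a million-fold averaging time, hence days to weeks of integration.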
These clocks can measure the difference in the flow of time between your head and your feet (and quite a lot more accurate than that)
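The head-to-feet claim follows from the weak-field formula delta-f/f ~ g*h/c^2:

```python
# Fractional frequency shift between two heights near Earth's surface
# (weak-field approximation).
g = 9.81       # m/s^2
c = 2.998e8    # m/s

def height_shift(h_meters):
    return g * h_meters / c**2

print(height_shift(1.7))    # ~1.9e-16 head-to-feet (~1.7 m tall person)
print(height_shift(0.001))  # ~1.1e-19: a millimeter is within reach
                            # of a clock good to the 19th decimal place
```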
Being able to count trillions of ticks is entirely possible in clocks or rotary encoders, just nobody bothers to do so on rotary encoders very often.
https://arxiv.org/abs/2504.13071 ("High-Stability Single-Ion Clock with $5.5\times10^{-19}$ Systematic Uncertainty")
I'm up to three running GPS clocks, one with an OCXO, one with a TCXO, and one with rubidium holdover... going to finally be able to say what time it might be more intelligently :)
Also, I hope to meet CuriousMarc this weekend! He's done some fascinating things with Cesium clocks.
Lane awareness accuracy needs some serious looking at. Lane markings are designed for the human eye for obvious reasons. Those white lines and "cats eyes" are quite literally (lol) visual cues to keep you on track. If we want machines to drive for us then we will need other ways to do lane compliance, rather than just assist.
My Chinese-designed MG4 EV has lane assist. It certainly has four optical cameras but beyond that, I don't know; I think it has a fore and aft proximity sensor of some sort. I do know it makes some dreadful errors of judgment with regard to lanes. It's normally fine on a motorway or dual carriageway but gets confused on even a major A road (e.g. the A30) when there are striations in the road surface, or it's wet, or the sun is at just the right angle to make skid marks look like lines.
If roads had something like a steel wire armoured cable buried underneath each lane and boundary line and there was a suitably cheap detector and sensors that could look far enough ahead and model the road then that might be a reasonably cheap option to augment optical systems.
Now, more sensors cost money, sorry, cost profits. They also add complexity and so on.
Sadly I think we will continue to see daft things like ... I gather that Teslas will only sport optical sensors and no LIDAR.
Autonomous cars are possible but they do need to be given a decent amount of, and variety of, ... sensors.
GPS will get you very well located on a map but you need local sensors to sort out localisation issues.
Please don't let them embezzle the future of scientific innovation.
The 'deal' of basic science in the US is that the government funds broadly and without prejudice. Topics are decided by experts and overseen by experts. These experts are taking large pay-cuts (compared to their worth in industry) to have the freedom to investigate their own interests. In return the public gets a vast amount of R&D on the cheap, much of which doesn't seem to have immediate ROI, but as we well know, has tremendous long term ROI.
When you have politicians deciding what is 'real' and 'pretend' then you break the fundamental deal by removing academic freedom.
I certainly agree there is some useless science. But in this case, the demagogues are politicizing some science as an excuse to cut all science and embezzle the funds. Cancer, Alzheimer's, infectious disease: it's all being cut right now, all because people are mad some grad student took 15K to study the relationship of smell to the gender identity of frogs.
> NIST researchers have made the most accurate atomic clock to date — one that can measure time down to the 19th decimal place.
That's precision, not accuracy.
A single measurement cannot be precise. Precision is a measure of how close multiple measurements are to one another. Accuracy is how close a single measurement is to its true value.
A clock that can measure a point in time to 19 decimal places with respect to its true value is accurate.
A single measurement can _never_ be precise, it is simply not possible.
You can be accurate, precise, or both.
Said clock may be precise, but not accurate.
* https://rntfnd.org/2024/10/03/china-completes-national-elora...
As we've seen regularly, GPS/GNSS has major risks with it, and it seems to have become a single point of failure:
* https://www.marineinsight.com/shipping-news/msc-container-sh...
* https://gcaptain.com/gps-jamming-in-strait-of-hormuz-raises-...