It's about the Network Time Protocol, specified in RFC 5905.
I understand that the Root Delay field of NTPv4 packets (called Synchronizing Distance in early versions of the protocol) is a number indicating the estimated round-trip delay to the primary/reference clock.
But why does the protocol need to know the delay to the primary clock? As described in the specification, the client uses the Origin, Receive, and Transmit timestamp fields (in cooperation with the server/peer) to calculate the correct time, and it never communicates, directly or indirectly, with the primary clock during the time calculation.
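For context, the calculation mentioned above uses the standard RFC 5905 on-wire formulas over the four timestamps (the fourth, the client's receive time, is taken locally and is not a packet field). A minimal sketch:

```python
def offset_and_delay(t1, t2, t3, t4):
    """RFC 5905 on-wire formulas.

    t1: client transmit time (Origin timestamp)
    t2: server receive time (Receive timestamp)
    t3: server transmit time (Transmit timestamp)
    t4: client receive time (measured locally on arrival)
    """
    # Clock offset of the server relative to the client.
    offset = ((t2 - t1) + (t3 - t4)) / 2
    # Round-trip delay between client and server.
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

Note that this `delay` is only the delay to the *server you queried*; Root Delay is what lets the client additionally account for the path from that server back to the primary clock.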
Is it because the server/peer has been synchronized by the reference clock in the past, and now wants to inform the client about the delay that has taken place?
By the way, what is the meaning of the related Root Dispersion field? Unfortunately, I didn't understand the dispersion concept, which the specification doesn't really explain in detail.
ntpd not only sets the local clock but can also act as a time server for other NTP clients. To do this it needs to know its own accuracy. To calculate this, it looks at the minimum and maximum round-trip delay to the (presumably perfect) root clock, as well as the error of its own system clock. It can then advertise to clients how good its clock is.

Root delay (DELTA) is the range of delays (max − min) measured to the root clock. The error contributed by this component is assumed to be DELTA / 2.

Dispersion is the error accumulated by the local system clock since it was last synchronized with the upstream clock. Plotted over time it looks like a sawtooth: it drops to 0 at each sync, then grows linearly until the next one.
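Putting the two together: RFC 5905 combines root delay and root dispersion into a "root distance" error bound, with dispersion growing linearly at the frequency tolerance PHI (15 ppm) between syncs, which produces the sawtooth described above. A simplified sketch (it omits refinements in the full specification, such as peer jitter and the minimum-dispersion floor):

```python
PHI = 15e-6  # RFC 5905 frequency tolerance: 15 microseconds of drift per second

def root_distance(root_delay, root_disp, seconds_since_sync):
    """Simplified error bound (seconds) on this server's clock.

    root_delay: advertised round-trip delay to the primary clock
    root_disp:  advertised dispersion at the last synchronization
    seconds_since_sync: time elapsed since the last sync with upstream
    """
    # Dispersion grows linearly at rate PHI since the last sync
    # (the sawtooth: reset on sync, then linear growth).
    dispersion = root_disp + PHI * seconds_since_sync
    # Half the round-trip delay bounds the asymmetry error.
    return root_delay / 2 + dispersion
```

A client selecting among several servers can prefer the one with the smallest root distance, since that server's advertised time carries the tightest error bound.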