The Wikipedia entry doesn't give details and the RFC is way too dense. Does anyone around here know, in a very general way, how NTP works?
I'm looking for an overview that explains how Marzullo's algorithm (or a modification of it) is used to translate a timestamp on a server into a timestamp on a client. Specifically, what mechanism produces accuracy that is, on average, within 10 ms, when the communication takes place over a network with highly variable latency that is frequently several times that?
Information About Implementing NTP

NTP uses the User Datagram Protocol (UDP) as its transport protocol. All NTP communication uses Coordinated Universal Time (UTC). An NTP network usually receives its time from an authoritative time source, such as a radio clock or an atomic clock attached to a time server.
Basically, the process starts when the NTP client sends a packet containing its own timestamp to a server. When the server receives that packet, it writes its receive timestamp into the packet and, just before replying, a transmit timestamp, then sends the packet back to the client.
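To make that exchange concrete, here is a minimal SNTP-style sketch in Python. The server name, timeout, and helper name are just for illustration; the field offsets follow the standard NTP packet layout, but this is only a sketch, not a full NTP implementation:

    import socket
    import struct
    import time

    # Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
    NTP_EPOCH_OFFSET = 2208988800

    def sntp_query(server="pool.ntp.org", port=123, timeout=5.0):
        # 48-byte client request: first byte 0x1B = LI 0, version 3, mode 3 (client)
        request = b"\x1b" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            a = time.time()                      # client clock when the request is sent
            sock.sendto(request, (server, port))
            reply, _ = sock.recvfrom(512)
            b = time.time()                      # client clock when the reply arrives
        # Server receive (X) and transmit (Y) timestamps: 64-bit fixed-point values
        x_sec, x_frac = struct.unpack("!II", reply[32:40])
        y_sec, y_frac = struct.unpack("!II", reply[40:48])
        x = x_sec + x_frac / 2**32 - NTP_EPOCH_OFFSET
        y = y_sec + y_frac / 2**32 - NTP_EPOCH_OFFSET
        return a, x, y, b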
NTP is used by servers, switches, routers and computers to synchronize their time over a network. It enables these devices to keep precise time by syncing their clocks with an external time source on a regular basis.
NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more.
(This isn't Marzullo's algorithm. That only comes into play when an implementation combines time from several sources to pick the most accurate ones. What follows is how an ordinary client gets the time using a single server.)
First of all, NTP timestamps are stored as seconds since January 1, 1900: 32 bits for the whole seconds and 32 bits for the fractional part of a second.
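As a small illustration of that format (the helper name and the sample bytes are made up), a 64-bit NTP timestamp can be decoded like this:

    import struct
    from datetime import datetime, timedelta, timezone

    NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

    def ntp_timestamp_to_datetime(raw8):
        # 32-bit whole seconds followed by a 32-bit binary fraction of a second
        seconds, fraction = struct.unpack("!II", raw8)
        return NTP_EPOCH + timedelta(seconds=seconds + fraction / 2**32)

    # 0x00000001 seconds + 0x80000000 fraction = 1.5 seconds after the 1900 epoch
    print(ntp_timestamp_to_datetime(bytes.fromhex("0000000180000000")))
    # 1900-01-01 00:00:01.500000+00:00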
The synchronization is tricky. The client records the timestamp, call it A, when it sends the request (all these values are in seconds). The server sends a reply containing the "true" time when it received the packet (call that X) and the "true" time at which it transmits the reply (Y). The client receives that packet and logs the time at which it arrived (B).
NTP assumes that the time spent on the network is the same for sending and receiving. Over enough intervals over sane networks, it should average out to be so. We know that the total transit time from sending the request to receiving the response was B-A seconds. We want to remove the time that the server spent processing the request (Y-X), leaving only the network traversal time, so that's B-A-(Y-X). Since we're assuming the network traversal time is symmetric, the amount of time it took the response to get from the server to the client is [B-A-(Y-X)]/2. So we know that the server sent its response at time Y, and it took us [B-A-(Y-X)]/2 seconds for that response to get to us.
So the true time when we received the response is Y+[B-A-(Y-X)]/2 seconds. And that's how NTP works.
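In code, that calculation looks something like this (a sketch; the function name is mine, and it relies on the symmetric-latency assumption above):

    def offset_and_delay(a, x, y, b):
        # a: client clock when the request was sent
        # x: server clock when the request arrived
        # y: server clock when the reply was sent
        # b: client clock when the reply arrived
        delay = (b - a) - (y - x)          # round trip minus server processing time
        true_receive_time = y + delay / 2  # assumes one-way latency is symmetric
        offset = true_receive_time - b     # positive means the client clock is behind
        return offset, delay

Feeding in the four timestamps returned by the sntp_query sketch above gives a one-shot estimate of the client's clock error and of the network delay.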
Example (in whole seconds to make the math easy):
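(These numbers are made up purely to illustrate the formula.) Suppose the client sends its request when its own clock reads A = 100 and receives the reply when its clock reads B = 110. The server's clock reads X = 150 when the request arrives and Y = 152 when the reply is sent. The total round trip was B - A = 10 seconds, of which Y - X = 2 seconds was server processing, so the network accounted for 8 seconds, assumed to be 4 seconds each way. The reply left the server at true time 152 and arrived 4 seconds later, at true time 156. Since the client's clock read 110 at that moment, it is about 46 seconds behind and should be adjusted accordingly.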
In a proper implementation, the client runs as a daemon, all the time. Over a long period with many samples, NTP can determine whether the computer's clock runs slow or fast and adjust it accordingly, allowing it to keep reasonably good time even if it is later disconnected from the network. Together with averaging the responses from the server and applying more sophisticated filtering, you can get impressively accurate time.
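As a purely illustrative sketch of that idea, one simple approach is to fit a line to repeated offset measurements: the slope estimates the clock's frequency error (drift). Real NTP daemons use a much more elaborate clock-discipline loop, so treat this only as a toy version:

    def estimate_drift(samples):
        # samples: list of (local_time, measured_offset) pairs collected over time
        n = len(samples)
        mean_t = sum(t for t, _ in samples) / n
        mean_o = sum(o for _, o in samples) / n
        num = sum((t - mean_t) * (o - mean_o) for t, o in samples)
        den = sum((t - mean_t) ** 2 for t, _ in samples)
        return num / den  # drift in seconds per second (multiply by 1e6 for ppm)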
There's more, of course, to a proper implementation than that, but that's the gist of it.