Measuring elapsed time over network

Tags: c++, time

I have developed a server and client application for streaming video frames from one end to another using RTSP. Now, in order to gather statistics which will assist me in improving my applications, I need to measure the elapsed time between sending the frame and receiving the frame.

At the moment I am using the following formula:

Client_Receive_Timestamp - Server_Send_Timestamp = Elapsed_Time

Problem

It seems to me that the elapsed time is about 100-200ms too high. I think the reason is that the server clock and client clock are not in sync and have a difference of about 100-200ms.

Question

How can I accurately measure the elapsed time between the two machines?

The question Accurately measuring elapsed time between machines suggests calculating a round-trip delay. However, I can't use that solution because the client doesn't request the frames; it simply receives them via RTSP.

asked Sep 17 '15 by chrisp

2 Answers

Assuming

  • you want to measure only the latency of the one-way trip
  • each machine provides a clock of acceptable resolution
  • both clocks can be synchronized to acceptable accuracy

then you can simply subtract the "sent" timestamp from the "received" timestamp to obtain the latency. The observed error will be at most the sum of the two clocks' errors. If the time scales are small enough (probably anything under an hour), you can reasonably ignore clock-slew effects.
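Note that timestamps compared across machines must come from a wall clock with a shared epoch, such as std::chrono::system_clock. Here is a minimal sketch of that subtraction; the FrameHeader struct and the helper names are illustrative assumptions, not part of RTSP:

/* one_way.cc -- sketch: sender stamps the frame, receiver subtracts. */
#include <chrono>
#include <cstdint>
#include <iostream>

using Clock = std::chrono::system_clock;

// Hypothetical frame header carrying the send time as microseconds
// since the Unix epoch.
struct FrameHeader {
  std::uint64_t send_us;
};

// Called on the server just before the frame goes out.
FrameHeader stamp_on_send() {
  auto us = std::chrono::duration_cast<std::chrono::microseconds>(
      Clock::now().time_since_epoch()).count();
  return FrameHeader{static_cast<std::uint64_t>(us)};
}

// Called on the client as soon as the frame arrives; only meaningful
// if both system clocks are NTP-synchronized.
std::int64_t latency_us_on_receive(const FrameHeader& h) {
  auto us = std::chrono::duration_cast<std::chrono::microseconds>(
      Clock::now().time_since_epoch()).count();
  return static_cast<std::int64_t>(us) - static_cast<std::int64_t>(h.send_us);
}

int main() {
  FrameHeader h = stamp_on_send();
  // ... frame travels over the network ...
  std::cout << "one-way latency ~= " << latency_us_on_receive(h) << "us\n";
  return 0;
}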

If ntpd is not already running on both machines, and if you have the necessary permissions, then you can

$ sudo ntpdate -v pool.ntp.org

to force a synchronization with the pool of publicly-available time servers.

Then you can use the C++11 std::chrono::high_resolution_clock to calculate a duration:

/* hrc.cc */
#include <chrono>
#include <iostream>

int main() {
  using std::chrono::high_resolution_clock;
  // send something
  high_resolution_clock::time_point start = high_resolution_clock::now();
  std::cout << "time this" << std::endl;
  // receive something
  high_resolution_clock::time_point stop = high_resolution_clock::now();
  std::cout << "duration == "
            << std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count()
            << "ns" << std::endl;
  return 0;
}

Here's what the previous example looks like on my system:

$ make hrc && ./hrc
c++     hrc.cc   -o hrc
time this
duration == 32010ns
answered Sep 18 '22 by rubicks

I need to measure the elapsed time between sending the frame and receiving the frame.

You don't need precise cross-machine timestamps for this. You can estimate the latency from round trips and average over many measurements.

If A sends a packet (or a frame) to B and B responds immediately(*):

A(sendTime) ---> B ---> A(receivedTime)

then you can calculate the latency easily:

latency = (receivedTime - sendTime) / 2

This assumes, of course, that the latency is symmetrical. You can find more elaborate algorithms by searching for phrases like "network latency estimation algorithm".
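A minimal sketch of this round-trip measurement in C++; the send_ping/wait_for_pong helpers are placeholders for your transport, not real API calls. steady_clock is fine here because both timestamps are taken on machine A:

/* rtt.cc -- sketch: estimate one-way latency as half the round trip. */
#include <chrono>
#include <iostream>

using Clock = std::chrono::steady_clock;  // both timestamps live on A

// Placeholders for your transport layer.
void send_ping() { /* write a small packet to B */ }
void wait_for_pong() { /* block until B's immediate reply arrives */ }

int main() {
  Clock::time_point sendTime = Clock::now();
  send_ping();
  wait_for_pong();  // B responds "immediately"
  Clock::time_point receivedTime = Clock::now();

  auto rtt_us = std::chrono::duration_cast<std::chrono::microseconds>(
      receivedTime - sendTime).count();
  std::cout << "estimated one-way latency ~= " << rtt_us / 2 << "us\n";
  return 0;
}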

Having the estimated latency, you can of course estimate the clock offset between the machines (though that doesn't seem necessary here):

A(sendTime) ---> B(receivedTimeB) --(receivedTimeB)--> A

timeDelta = sendTime + latency - receivedTimeB

Note that even if you average many results, this algorithm is probably highly biased; it is posted only as a simple illustration of the general idea.
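For completeness, a sketch of the offset calculation averaged over several probes. run_probe is a hypothetical helper; it is simulated below with fixed numbers (1000us each way, B's clock 50us ahead) so the example actually runs:

/* offset.cc -- sketch: estimate the clock offset between A and B. */
#include <cstdint>
#include <iostream>

// Timestamps gathered by one ping-pong exchange, all in microseconds.
struct Probe {
  std::int64_t sendTime;       // A's clock, when the probe left A
  std::int64_t receivedTimeB;  // B's clock, when the probe reached B
  std::int64_t receivedTime;   // A's clock, when B's reply reached A
};

// Hypothetical helper over your transport; simulated here.
Probe run_probe() {
  return Probe{0, 1050, 2000};
}

int main() {
  const int kProbes = 100;
  double sum = 0;
  for (int i = 0; i < kProbes; ++i) {
    Probe p = run_probe();
    std::int64_t latency = (p.receivedTime - p.sendTime) / 2;
    // timeDelta < 0 means B's clock is ahead of A's.
    sum += static_cast<double>(p.sendTime + latency - p.receivedTimeB);
  }
  std::cout << "estimated timeDelta ~= " << sum / kProbes << "us\n";
  return 0;
}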


(*) Of course, the response never happens truly immediately, which introduces an error. Its size depends on how heavily loaded machine B is.

answered Sep 19 '22 by BartoszKP