 

How To Compute HTTP request processing time without network latency?

Tags:

http

tcp

latency

Because of the geographic distance between server and client, network latency can vary a lot. So I want to get the "pure" request processing time of the service, without the network latency.

I want to estimate network latency from the TCP connect time. As far as I understand, this time depends mostly on the network.

The main idea is to compute:

  • the TCP connect time,
  • the time until the first TCP packet of the response is received,
  • the "pure" service time = first packet receive time (waiting time) - TCP connect time.

I divide the TCP connect time by 2 because it in fact consists of two request-response exchanges (the 3-way handshake).

I have two questions:

  1. Should I compute the receive time of all TCP packets instead of only the first one?
  2. Is this method okay in general?

PS: As a tool I use Erlang's gen_tcp. I can show the code.
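A simplified sketch of this measurement using gen_tcp (the module name, host, port, and request are placeholders, not the real code):

    %% Sketch: measure TCP connect time and time-to-first-byte for one request.
    -module(req_timing).
    -export([measure/3]).

    measure(Host, Port, Request) ->
        T0 = erlang:monotonic_time(microsecond),
        {ok, Socket} = gen_tcp:connect(Host, Port,
                                       [binary, {packet, raw}, {active, false}]),
        T1 = erlang:monotonic_time(microsecond),
        ok = gen_tcp:send(Socket, Request),
        %% In passive mode, recv/2 with length 0 returns as soon as the first
        %% packet of the response arrives.
        {ok, _FirstPacket} = gen_tcp:recv(Socket, 0),
        T2 = erlang:monotonic_time(microsecond),
        gen_tcp:close(Socket),
        ConnectTime = T1 - T0,                 %% 3-way handshake time
        WaitTime    = T2 - T1,                 %% waiting time until first packet
        PureService = WaitTime - ConnectTime,  %% optionally use ConnectTime / 2
        {ConnectTime, WaitTime, PureService}.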

asked Mar 04 '13 by Dmitry Dushkin

2 Answers

If anything, I guess the "pure" service time = TCP first packet receive - TCP connect; you have written it the other way round.

A possible answer to your first question: you should ideally compute at least some sort of average, using the pure service time of many packets rather than just the first packet.

Ideally you would also have worst-case, average-case, and best-case service times.

To answer the second question, we would need to know why you need the pure service time only. Since it is a network application, network latencies (connection time, etc.) should also be included in the "response time", not just the pure service time. That is my view based on the given information.
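For illustration, a sketch of such an aggregation (assuming a measure/3 function like the one sketched in the question, which returns the pure service time as the third element of its result):

    %% Sketch: run N measurements and report best/average/worst pure service time.
    stats(Host, Port, Request, N) ->
        Times = [element(3, req_timing:measure(Host, Port, Request))
                 || _ <- lists:seq(1, N)],
        {lists:min(Times),                      %% best case
         lists:sum(Times) / length(Times),      %% average case
         lists:max(Times)}.                     %% worst case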

answered Sep 28 '22 by goldenmean


I worked on a similar question in the past, for a network performance monitoring vendor. IMHO, there are a few questions to consider before proceeding:

  • Connection time and latency: if you base your network latency metric on the connection time, be aware that it covers 3 steps: the client sends a TCP SYN, the server responds with a SYN-ACK, and the client replies with a final ACK to set up the TCP connection. This means the connect time (CT) is equivalent to 1.5 RTT (round-trip time). This validates taking the first two steps of the TCP setup process into account, as you mention.
  • Taking later TCP exchanges into account: while this at first sounds like a great idea for continuing to evaluate network latency over the course of the session, it is a lot trickier. Here is why:
    1. Not all packets have to be acknowledged (RFC 1122, or https://en.wikipedia.org/wiki/TCP_delayed_acknowledgment), which will generate false measurements when it occurs, so you will need a heuristic to exclude these from your calculations.
    2. Not all systems treat acknowledging packets as a high-priority task. This means that some high values will pollute your network latency data and simply reflect, for example, the load on the server.

So if you use only the first (and reliable) measurement, you may miss some network delay variation (especially in applications using long-lasting TCP sessions).
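As a rough illustration of the 1.5 RTT point above (a sketch, not from the original answer; ConnectTime and WaitTime are the values measured in the question):

    %% Sketch: if the connect time covers ~1.5 RTT, one RTT is ConnectTime / 1.5;
    %% subtracting one RTT from the time-to-first-byte approximates the
    %% server-side processing time.
    estimate(ConnectTime, WaitTime) ->
        Rtt = ConnectTime / 1.5,
        {Rtt, WaitTime - Rtt}.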

answered Sep 28 '22 by Thomas Rossignol