I am trying to improve the performance of a web app. Profiling the app itself, I found its response times quite acceptable (100-200 ms), but when I test the app with ApacheBench, the response time sometimes exceeds 1 second. Looking closely at the logs, I occasionally see a big discrepancy between request_time and upstream_response_time:
"GET /wsq/p/12 HTTP/1.0" 200 114081 "-" "ApacheBench/2.3" 0.940 0.286 "GET /wsq/p/31 HTTP/1.0" 200 114081 "-" "ApacheBench/2.3" 0.200 0.086
The upstream_response_time is quite close to what I measured when profiling the web app, but request_time is close to one second for the first request. What could cause this discrepancy?
I understand that request_time is recorded from the first byte received to the last response byte sent, so it can be affected by network conditions and client-side problems. I am wondering what I should do to reduce the average request_time as much as possible.
$request_time – Full request time, starting when NGINX reads the first byte from the client and ending when NGINX sends the last byte of the response body. Since it runs until the last byte is sent to the client, it also includes the time spent transmitting the response to the client.
$request_time: request processing time in seconds with a milliseconds resolution; time elapsed between the first bytes were read from the client and the log write after the last bytes were sent to the client.
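Applied to the log lines above: the first request spent roughly 0.940 - 0.286 ≈ 0.65 s outside the upstream (reading the request and pushing the 114 KB response body out to the client), whereas the second spent only about 0.200 - 0.086 ≈ 0.11 s there.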
A high request_time may be due, among other things, to a client with a slow connection, which you can't do much about. Thus, a high request_time does not necessarily reflect the performance of your server and/or application.
You really should not spend too much time on request_time when profiling; instead, measure things like the application's response time (i.e. upstream_response_time).
That said, there are some things you can do that may affect request_time.
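Common examples on the nginx side are response compression, client keepalive, and proxy buffering, all of which shorten the time nginx spends transmitting to a slow client. A minimal sketch of such tuning (assuming nginx proxies to the app on 127.0.0.1:8080, a hypothetical address; values are illustrative, not a definitive recommendation):

http {
    gzip on;                          # compress response bodies so less data crosses the client link
    gzip_types text/html application/json;

    keepalive_timeout 65;             # reuse client connections instead of re-handshaking per request

    server {
        listen 80;

        location /wsq/ {
            proxy_pass http://127.0.0.1:8080;  # hypothetical upstream app address
            proxy_buffering on;       # nginx absorbs the upstream response quickly and
                                      # drains it to the client, freeing the app sooner
        }
    }
}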