I'm using ApacheBench (ab) to measure the performance of two nginx instances on Linux. They use the same config file; the only difference is that one of them is running in a Docker container.
Nginx on Host System:
Running: ab -n 50000 -c 1000 http://172.17.0.2:7082/
Concurrency Level: 1000
Time taken for tests: 9.376 seconds
Complete requests: 50000
Failed requests: 0
Total transferred: 8050000 bytes
HTML transferred: 250000 bytes
Requests per second: 5332.94 [#/sec] (mean)
Time per request: 187.514 [ms] (mean)
Time per request: 0.188 [ms] (mean, across all concurrent requests)
Transfer rate: 838.48 [Kbytes/sec] received
Nginx in Docker container:
Running: ab -n 50000 -c 1000 http://172.17.0.2:6066/
Concurrency Level: 1000
Time taken for tests: 31.274 seconds
Complete requests: 50000
Failed requests: 0
Total transferred: 8050000 bytes
HTML transferred: 250000 bytes
Requests per second: 1598.76 [#/sec] (mean)
Time per request: 625.484 [ms] (mean)
Time per request: 0.625 [ms] (mean, across all concurrent requests)
Transfer rate: 251.37 [Kbytes/sec] received
Just wondering why the container one has such poor performance.
nginx.conf:
worker_processes auto;
worker_rlimit_nofile 10240;

events {
    use epoll;
    multi_accept on;
    worker_connections 4096;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 10;
    client_header_timeout 10;
    client_body_timeout 10;
    send_timeout 10;
    tcp_nopush on;
    tcp_nodelay on;

    server {
        listen 80;
        server_name localhost;

        location / {
            return 200 'hello';
        }

        error_page 500 502 503 504 /50x.html;

        location = /50x.html {
            root html;
        }
    }
}
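A quick way to confirm which network mode the container under test is actually using (the answer below points at bridge vs host networking as the likely cause) is docker inspect; the container name here is a placeholder:
docker inspect -f '{{.HostConfig.NetworkMode}}' <container-name>
# typically prints "default" or "bridge" for the default bridge network, "host" for host networking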
I'd like to add to @Adrian Mouat's answer something I've just found in the docs.
It is written in the Docker run reference:
NETWORK: HOST
Compared to the default bridge mode, the host mode gives significantly better networking performance since it uses the host's native networking stack, whereas the bridge has to go through one level of virtualization through the docker daemon. It is recommended to run containers in this mode when their networking performance is critical, for example, a production Load Balancer or a High Performance Web Server.
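The two setups compared in the flame graphs below can be reproduced roughly like this; the stock nginx image is an assumption (the question uses a custom config), and the containers should be started one at a time since both bind port 80 on the host:
# default bridge network with a published port (traffic goes through Docker's bridge/NAT path)
docker run -d --name nginx-bridge -p 80:80 nginx
# host networking: nginx binds directly to port 80 on the host
docker run -d --name nginx-host --net=host nginx
# same benchmark against either setup
ab -n 50000 -c 1000 http://my-host-ip/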
When using the host's native networking stack with --net=host, there are fewer system calls, and this is clearly depicted in the following Flame Graphs. Details:
sudo perf record -F 99 -a -g -- sleep 30
ab -n 50000 -c 1000 http://my-host-ip/ (takes place while capturing)
For more info on Flame Graphs, check Brendan Gregg's website: www.brendangregg.com/
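To turn the perf capture into the flame graph SVGs referenced below, the usual route is Brendan Gregg's FlameGraph scripts; a minimal sketch, assuming the perf.data from the recording above is in the current directory:
git clone https://github.com/brendangregg/FlameGraph
sudo perf script > out.perf
./FlameGraph/stackcollapse-perf.pl out.perf > out.folded
./FlameGraph/flamegraph.pl out.folded > nginx-flamegraph.svg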
-p 80:80: [flame graph: full picture; zoomed to the nginx part]
--net=host: [flame graph: full picture; zoomed to the nginx part]