 

Event loop has high ksoftirqd load; nginx does not, but makes the same system calls. Why?

I wrote some code that has an epoll event loop, accepts new connections and pretends to be an HTTP server. The posted code is the absolute minimum ... I removed everything (including all error checks) to make it as short and to the point as possible:

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/ip.h>
#include <netinet/in.h>
#include <sys/uio.h>
#include <unistd.h>


int main () {
    // Non-blocking, close-on-exec TCP listening socket
    int servFd = socket (AF_INET, SOCK_STREAM | SOCK_NONBLOCK | SOCK_CLOEXEC, IPPROTO_IP);
    int value = 1;
    setsockopt (servFd, SOL_SOCKET, SO_REUSEADDR, &value, sizeof (value));

    struct sockaddr_in servAddr;
    memset (&servAddr, 0, sizeof (servAddr));
    servAddr.sin_family = AF_INET;
    servAddr.sin_addr.s_addr = 0;
    servAddr.sin_port = htons (8081);
    bind (servFd, (struct sockaddr*)&servAddr, sizeof (servAddr));
    listen (servFd, 511);

    // Register the listening socket with a new epoll instance
    int efd = epoll_create1 (EPOLL_CLOEXEC);
    struct epoll_event epollEvt;
    epollEvt.events = EPOLLIN | EPOLLRDHUP;
    epollEvt.data.u32 = servFd;
    epoll_ctl (efd, EPOLL_CTL_ADD, servFd, &epollEvt);

    for (;;) {
        struct epoll_event pollEvent[512];
        // Block until at least one fd is ready (timeout -1 = wait forever)
        int eventCount = epoll_wait (efd, pollEvent, 512, -1);
        for (int i = 0; i < eventCount; ++i) {
            struct epoll_event* curEvent = &pollEvent[i];
            if (curEvent->data.u32 == servFd) {
                // Listening socket is readable: accept the new connection
                // and register it with the epoll instance (edge-triggered)
                int clientFd = accept4 (servFd, NULL, NULL, SOCK_NONBLOCK | SOCK_CLOEXEC);
                struct epoll_event epollEvt;
                epollEvt.events = EPOLLIN | EPOLLRDHUP | EPOLLET;
                epollEvt.data.u32 = clientFd;
                epoll_ctl (efd, EPOLL_CTL_ADD, clientFd, &epollEvt);
                continue;
            }

            // Client socket is readable: read the request, send a canned
            // response and close the connection
            int clientFd = curEvent->data.u32;
            char recvBuffer[2048];
            recvfrom (clientFd, recvBuffer, 2048, 0, NULL, NULL);
            char sndMsg[] = "HTTP/1.1 200 OK\r\nServer: nginx/1.12.2\r\nDate: Tue, 10 Apr 2018 17:41:57 GMT\r\nContent-Type: text/plain\r\nContent-Length: 0\r\nConnection: close\r\n\r\n";
            size_t sndMsgLength = sizeof (sndMsg) - 1;
            struct iovec sndBuffer;
            sndBuffer.iov_base = sndMsg;
            sndBuffer.iov_len = sndMsgLength;
            writev (clientFd, &sndBuffer, 1);
            close (clientFd);
        }
    }
    return 0;
}

localhost:~# gcc -Wall -O2 test.c -o test

localhost:~# gcc --version
gcc (Alpine 6.4.0) 6.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

I did some load testing of this code and compared it with nginx to see if I did something wrong or if there is something to improve. I expected this code to be the fastest possible implementation, since every other "real" web server has to do a lot more work in userspace. But still ... somehow nginx beats it in requests per second when multiple load-generator threads are used. (Note that I configured nginx to use just one worker to handle every request.)

// ab running with 1 worker from the local machine
localhost:~# ab -n 100000 -c 1 http://10.0.0.111:8081/index.html
Requests per second:    13228.07 [#/sec] (mean)  //[to nginx]
Requests per second:    15300.35 [#/sec] (mean)  //[to testcode]
// ab running with 4 workers from the local machine
localhost:~# ab -n 100000 -c 4 http://10.0.0.111:8081/index.html
Requests per second:    30902.63 [#/sec] (mean)  //[to nginx]
Requests per second:    24734.76 [#/sec] (mean)  //[to testcode]

The first test has the expected result ... the test code is faster, since it doesn't do anything except generate a hard-coded response. But why is nginx faster when several requests are in flight? How can that be?
The only explanation I can come up with is that there is something different in kernel space and that nginx uses some sockopts (like TCP_FASTOPEN or TCP_DEFER_ACCEPT) or maybe some other system calls to do its thing. That's why I did some straces and made my code do the exact same thing as nginx does (from a kernel perspective); you can see the strace attached below. Still ... it is faster and I don't understand why.
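For illustration, this is how such an option would be set on the listening socket right after listen() — a sketch only, the posted code does not use it:

// Sketch (not in the posted code): TCP_DEFER_ACCEPT tells the kernel to
// complete the handshake but only wake accept4() once request data arrived.
#include <netinet/tcp.h>   // for TCP_DEFER_ACCEPT

int deferSeconds = 1;
setsockopt (servFd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &deferSeconds, sizeof (deferSeconds));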

// ab running with 50 workers from a remote machine
localhost:~# ab -n 100000 -c 50 http://10.0.0.111:8081/index.html
Requests per second:    27507.60 [#/sec] (mean)  //[to nginx]
Requests per second:    24208.51 [#/sec] (mean)  //[to testcode]

This test case has the exact same result, but I noticed some difference in CPU usage.

  • My test code runs at about 60% CPU load and ksoftirqd/0 runs at about 80%
  • nginx runs at about 99% CPU load and ksoftirqd/0 runs at just 30%
  • ksoftirqd/0 has no noticeable CPU load in the localhost setting in either case
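
To attribute that load, one can watch the softirq counters directly — a minimal sketch (assuming the usual Linux /proc layout) that prints the NET_RX/NET_TX lines once per second while the benchmark runs:

/* Sketch: print the per-CPU NET_RX/NET_TX softirq counters every second;
 * this is where the work done by ksoftirqd is accounted. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main (void) {
    for (;;) {
        FILE* f = fopen ("/proc/softirqs", "r");
        if (!f)
            return 1;
        char line[512];
        while (fgets (line, sizeof (line), f))
            if (strstr (line, "NET_RX:") || strstr (line, "NET_TX:"))
                fputs (line, stdout);
        fclose (f);
        sleep (1);
    }
}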

strace of nginx:

localhost:~# strace -tt -f /usr/sbin/nginx 
//Removed loading of libraries to stay under the 30k char body limit for this question
13:28:20.426369 open("/etc/localtime", O_RDONLY|O_NONBLOCK|O_CLOEXEC) = 3
13:28:20.426549 fstat(3, {st_mode=S_IFREG|0644, st_size=127, ...}) = 0
13:28:20.426723 mmap(NULL, 127, PROT_READ, MAP_SHARED, 3, 0) = 0x74ae1cf8c000
13:28:20.426847 close(3)                = 0
13:28:20.427023 getpid()                = 2186
13:28:20.427164 open("/var/lib/nginx/logs/error.log", O_WRONLY|O_CREAT|O_APPEND, 0644) = 3
13:28:20.427341 brk(0xd104603e000)      = 0xd104603e000
13:28:20.427503 open("/etc/ssl/openssl.cnf", O_RDONLY) = 4
13:28:20.427680 brk(0xd104603f000)      = 0xd104603f000
13:28:20.427819 readv(4, [{iov_base="", iov_len=0}, {iov_base="[ req ]\n#default_bits\t\t= 2048\n#d"..., iov_len=1024}], 2) = 745
13:28:20.428089 brk(0xd1046040000)      = 0xd1046040000
13:28:20.428243 readv(4, [{iov_base="", iov_len=0}, {iov_base="", iov_len=1024}], 2) = 0
13:28:20.428476 close(4)                = 0
13:28:20.428718 brk(0xd1046041000)      = 0xd1046041000
13:28:20.428880 brk(0xd1046042000)      = 0xd1046042000
13:28:20.429179 brk(0xd1046043000)      = 0xd1046043000
13:28:20.429319 brk(0xd1046044000)      = 0xd1046044000
13:28:20.429552 brk(0xd1046045000)      = 0xd1046045000
13:28:20.429775 brk(0xd1046046000)      = 0xd1046046000
13:28:20.429935 brk(0xd1046047000)      = 0xd1046047000
13:28:20.430220 brk(0xd1046048000)      = 0xd1046048000
13:28:20.430391 brk(0xd1046049000)      = 0xd1046049000
13:28:20.430515 brk(0xd104604b000)      = 0xd104604b000
13:28:20.430796 brk(0xd104604c000)      = 0xd104604c000
13:28:20.431042 brk(0xd104604d000)      = 0xd104604d000
13:28:20.431238 brk(0xd104604e000)      = 0xd104604e000
13:28:20.431396 brk(0xd104604f000)      = 0xd104604f000
13:28:20.431581 brk(0xd1046050000)      = 0xd1046050000
13:28:20.431820 brk(0xd1046051000)      = 0xd1046051000
13:28:20.432112 brk(0xd1046054000)      = 0xd1046054000
13:28:20.432374 brk(0xd1046055000)      = 0xd1046055000
13:28:20.432509 brk(0xd1046056000)      = 0xd1046056000
13:28:20.432666 brk(0xd1046057000)      = 0xd1046057000
13:28:20.432836 brk(0xd1046058000)      = 0xd1046058000
13:28:20.433004 brk(0xd1046059000)      = 0xd1046059000
13:28:20.433203 brk(0xd104605a000)      = 0xd104605a000
13:28:20.433400 brk(0xd104605b000)      = 0xd104605b000
13:28:20.433610 brk(0xd104605c000)      = 0xd104605c000
13:28:20.433740 brk(0xd104605d000)      = 0xd104605d000
13:28:20.433895 brk(0xd104605e000)      = 0xd104605e000
13:28:20.434020 brk(0xd104605f000)      = 0xd104605f000
13:28:20.434240 brk(0xd1046060000)      = 0xd1046060000
13:28:20.434419 brk(0xd1046061000)      = 0xd1046061000
13:28:20.434622 uname({sysname="Linux", nodename="localhost", ...}) = 0
13:28:20.434801 sched_getaffinity(0, 128, [0, 1, 2, 3, 4, 5]) = 32
13:28:20.435077 prlimit64(0, RLIMIT_NOFILE, NULL, {rlim_cur=1024, rlim_max=4*1024}) = 0
13:28:20.435260 brk(0xd1046066000)      = 0xd1046066000
13:28:20.435424 uname({sysname="Linux", nodename="localhost", ...}) = 0
13:28:20.435578 brk(0xd104606b000)      = 0xd104606b000
13:28:20.435700 open("/etc/nginx/nginx.conf", O_RDONLY) = 4
13:28:20.435912 fstat(4, {st_mode=S_IFREG|0644, st_size=2781, ...}) = 0
13:28:20.436115 pread64(4, "\n\n\nuser nginx;\npcre_jit on;\n#tim"..., 2781, 0) = 2781
13:28:20.436284 geteuid()               = 0
13:28:20.436440 open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 5
13:28:20.436596 fcntl(5, F_SETFD, FD_CLOEXEC) = 0
13:28:20.436725 fcntl(5, F_SETFD, FD_CLOEXEC) = 0
13:28:20.436857 readv(5, [{iov_base="", iov_len=0}, {iov_base="root:x:0:0:root:/root:/bin/ash\nb"..., iov_len=1024}], 2) = 1024
13:28:20.437047 readv(5, [{iov_base="", iov_len=0}, {iov_base="sbin/nologin\nntp:x:123:123:NTP:/"..., iov_len=1024}], 2) = 397
13:28:20.437235 lseek(5, -43, SEEK_CUR) = 1378
13:28:20.437353 close(5)                = 0
13:28:20.437520 open("/etc/group", O_RDONLY|O_CLOEXEC) = 5
13:28:20.437684 fcntl(5, F_SETFD, FD_CLOEXEC) = 0
13:28:20.437800 fcntl(5, F_SETFD, FD_CLOEXEC) = 0
13:28:20.437933 readv(5, [{iov_base="", iov_len=0}, {iov_base="root:x:0:root\nbin:x:1:root,bin,d"..., iov_len=1024}], 2) = 776
13:28:20.438097 close(5)                = 0
13:28:20.438240 epoll_create1(0)        = 5
13:28:20.438429 close(5)                = 0
13:28:20.438681 brk(0xd1046070000)      = 0xd1046070000
13:28:20.438842 brk(0xd1046072000)      = 0xd1046072000
13:28:20.439053 brk(0xd1046074000)      = 0xd1046074000
13:28:20.439175 brk(0xd1046076000)      = 0xd1046076000
13:28:20.439418 brk(0xd104607b000)      = 0xd104607b000
13:28:20.439655 mmap(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x74ae1ce8b000
13:28:20.439886 brk(0xd1046080000)      = 0xd1046080000
13:28:20.440020 brk(0xd1046085000)      = 0xd1046085000
13:28:20.440225 open("/etc/nginx/mime.types", O_RDONLY) = 5
13:28:20.440380 fstat(5, {st_mode=S_IFREG|0644, st_size=3957, ...}) = 0
13:28:20.440523 pread64(5, "\ntypes {\n    text/html          "..., 3957, 0) = 3957
13:28:20.440725 close(5)                = 0
13:28:20.440977 brk(0xd104608a000)      = 0xd104608a000
13:28:20.441297 brk(0xd104608c000)      = 0xd104608c000
13:28:20.441453 close(4)                = 0
13:28:20.441587 mkdir("/var/tmp/nginx/client_body", 0700) = -1 EEXIST (File exists)
13:28:20.441814 stat("/var/tmp/nginx/client_body", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
13:28:20.442022 mkdir("/var/tmp/nginx/proxy", 0700) = -1 EEXIST (File exists)
13:28:20.442149 stat("/var/tmp/nginx/proxy", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
13:28:20.442257 mkdir("/var/tmp/nginx/fastcgi", 0700) = -1 EEXIST (File exists)
13:28:20.442407 stat("/var/tmp/nginx/fastcgi", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
13:28:20.442568 mkdir("/var/tmp/nginx/uwsgi", 0700) = -1 EEXIST (File exists)
13:28:20.442732 stat("/var/tmp/nginx/uwsgi", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
13:28:20.442945 mkdir("/var/tmp/nginx/scgi", 0700) = -1 EEXIST (File exists)
13:28:20.443051 stat("/var/tmp/nginx/scgi", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
13:28:20.443229 open("/var/log/nginx/access.log", O_WRONLY|O_CREAT|O_APPEND, 0644) = 4
13:28:20.443417 fcntl(4, F_SETFD, FD_CLOEXEC) = 0
13:28:20.443586 open("/var/log/nginx/error.log", O_WRONLY|O_CREAT|O_APPEND, 0644) = 5
13:28:20.443750 fcntl(5, F_SETFD, FD_CLOEXEC) = 0
13:28:20.443889 open("/var/lib/nginx/logs/error.log", O_WRONLY|O_CREAT|O_APPEND, 0644) = 6
13:28:20.444040 fcntl(6, F_SETFD, FD_CLOEXEC) = 0
13:28:20.444197 mmap(NULL, 2097152, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANONYMOUS, -1, 0) = 0x74ae1c0a0000
13:28:20.444382 socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 7
13:28:20.444515 setsockopt(7, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
13:28:20.444680 ioctl(7, FIONBIO, [1])  = 0
13:28:20.444808 bind(7, {sa_family=AF_INET, sin_port=htons(8081), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
13:28:20.445015 listen(7, 511)          = 0
13:28:20.445140 listen(7, 511)          = 0
13:28:20.445326 mmap(NULL, 65536, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x74ae1ce7b000
13:28:20.445493 prlimit64(0, RLIMIT_NOFILE, NULL, {rlim_cur=1024, rlim_max=4*1024}) = 0
13:28:20.445671 mmap(NULL, 1280, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANONYMOUS, -1, 0) = 0x74ae1ce7a000
13:28:20.445817 rt_sigprocmask(SIG_UNBLOCK, [RT_1 RT_2], NULL, 8) = 0
13:28:20.445977 rt_sigaction(SIGHUP, {sa_handler=0xd1041f1f3fc, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.446097 rt_sigaction(SIGUSR1, {sa_handler=0xd1041f1f3fc, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.446247 rt_sigaction(SIGWINCH, {sa_handler=0xd1041f1f3fc, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.446438 rt_sigaction(SIGTERM, {sa_handler=0xd1041f1f3fc, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.446635 rt_sigaction(SIGQUIT, {sa_handler=0xd1041f1f3fc, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.446886 rt_sigaction(SIGUSR2, {sa_handler=0xd1041f1f3fc, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.447093 rt_sigaction(SIGALRM, {sa_handler=0xd1041f1f3fc, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.447236 rt_sigaction(SIGINT, {sa_handler=0xd1041f1f3fc, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.447446 rt_sigaction(SIGIO, {sa_handler=0xd1041f1f3fc, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.447767 rt_sigaction(SIGCHLD, {sa_handler=0xd1041f1f3fc, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.447888 rt_sigaction(SIGSYS, {sa_handler=SIG_IGN, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.448094 rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x74ae1cd4a6cf}, NULL, 8) = 0
13:28:20.448253 rt_sigprocmask(SIG_BLOCK, ~[], [], 8) = 0
13:28:20.448396 fork(strace: Process 2187 attached
)                  = 2187
[pid  2187] 13:28:20.448594 gettid( <unfinished ...>
[pid  2186] 13:28:20.448643 rt_sigprocmask(SIG_SETMASK, [],  <unfinished ...>
[pid  2187] 13:28:20.448671 <... gettid resumed> ) = 2187
[pid  2186] 13:28:20.448700 <... rt_sigprocmask resumed> NULL, 8) = 0
[pid  2187] 13:28:20.448765 rt_sigprocmask(SIG_SETMASK, [],  <unfinished ...>
[pid  2186] 13:28:20.448792 exit_group(0 <unfinished ...>
[pid  2187] 13:28:20.448812 <... rt_sigprocmask resumed> NULL, 8) = 0
[pid  2186] 13:28:20.448836 <... exit_group resumed>) = ?
[pid  2187] 13:28:20.448854 getpid()    = 2187
[pid  2187] 13:28:20.448951 setsid( <unfinished ...>
[pid  2186] 13:28:20.449046 +++ exited with 0 +++
13:28:20.449055 <... setsid resumed> )  = 2187
13:28:20.449107 umask(000)              = 022
13:28:20.449212 open("/dev/null", O_RDWR) = 8
13:28:20.449309 dup2(8, 0)              = 0
13:28:20.449455 dup2(8, 1)              = 1
13:28:20.449573 close(8)                = 0
13:28:20.449692 open("/run/nginx/nginx.pid", O_RDWR|O_CREAT|O_TRUNC, 0644) = 8
13:28:20.449848 pwrite64(8, "2187\n", 5, 0) = 5
13:28:20.449978 close(8)                = 0
13:28:20.450111 dup2(6, 2)              = 2
13:28:20.450218 close(3)                = 0
13:28:20.450376 rt_sigprocmask(SIG_BLOCK, [HUP INT QUIT USR1 USR2 ALRM TERM CHLD WINCH IO], NULL, 8) = 0
13:28:20.450499 socketpair(AF_UNIX, SOCK_STREAM, 0, [3, 8]) = 0
13:28:20.450603 ioctl(3, FIONBIO, [1])  = 0
13:28:20.450696 ioctl(8, FIONBIO, [1])  = 0
13:28:20.450830 ioctl(3, FIOASYNC, [1]) = 0
13:28:20.450964 fcntl(3, F_SETOWN, 2187) = 0
13:28:20.451079 fcntl(3, F_SETFD, FD_CLOEXEC) = 0
13:28:20.451148 fcntl(8, F_SETFD, FD_CLOEXEC) = 0
13:28:20.451244 rt_sigprocmask(SIG_BLOCK, ~[], [HUP INT QUIT USR1 USR2 ALRM TERM CHLD WINCH IO], 8) = 0
13:28:20.451379 fork(strace: Process 2188 attached
 <unfinished ...>
[pid  2188] 13:28:20.451596 gettid( <unfinished ...>
[pid  2187] 13:28:20.451615 <... fork resumed> ) = 2188
[pid  2187] 13:28:20.451727 rt_sigprocmask(SIG_SETMASK, [HUP INT QUIT USR1 USR2 ALRM TERM CHLD WINCH IO],  <unfinished ...>
[pid  2188] 13:28:20.451754 <... gettid resumed> ) = 2188
[pid  2187] 13:28:20.451774 <... rt_sigprocmask resumed> NULL, 8) = 0
[pid  2188] 13:28:20.451942 rt_sigprocmask(SIG_SETMASK, [HUP INT QUIT USR1 USR2 ALRM TERM CHLD WINCH IO],  <unfinished ...>
[pid  2187] 13:28:20.451969 rt_sigsuspend([], 8 <unfinished ...>
[pid  2188] 13:28:20.451985 <... rt_sigprocmask resumed> NULL, 8) = 0
[pid  2188] 13:28:20.452053 getpid()    = 2188
[pid  2188] 13:28:20.452330 rt_sigprocmask(SIG_BLOCK, ~[RTMIN RT_1 RT_2], [HUP INT QUIT USR1 USR2 ALRM TERM CHLD WINCH IO], 8) = 0
[pid  2188] 13:28:20.452621 rt_sigprocmask(SIG_BLOCK, ~[], NULL, 8) = 0
[pid  2188] 13:28:20.452893 prlimit64(0, RLIMIT_NOFILE, {rlim_cur=8*1024, rlim_max=8*1024}, NULL) = 0
[pid  2188] 13:28:20.453075 futex(0x74ae1cf95064, FUTEX_WAKE_PRIVATE, 2147483647) = 0
[pid  2188] 13:28:20.453279 rt_sigprocmask(SIG_SETMASK, [HUP INT QUIT USR1 USR2 ALRM TERM CHLD WINCH IO], NULL, 8) = 0
[pid  2188] 13:28:20.453487 geteuid()   = 0
[pid  2188] 13:28:20.453667 rt_sigprocmask(SIG_BLOCK, ~[RTMIN RT_1 RT_2], [HUP INT QUIT USR1 USR2 ALRM TERM CHLD WINCH IO], 8) = 0
[pid  2188] 13:28:20.453861 rt_sigprocmask(SIG_BLOCK, ~[], NULL, 8) = 0
[pid  2188] 13:28:20.454091 setgid(103) = 0
[pid  2188] 13:28:20.454335 futex(0x74ae1cf95064, FUTEX_WAKE_PRIVATE, 2147483647) = 0
[pid  2188] 13:28:20.454583 rt_sigprocmask(SIG_SETMASK, [HUP INT QUIT USR1 USR2 ALRM TERM CHLD WINCH IO], NULL, 8) = 0
[pid  2188] 13:28:20.454822 socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0) = 9
[pid  2188] 13:28:20.455183 connect(9, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 24) = -1 ENOENT (No such file or directory)
[pid  2188] 13:28:20.455537 close(9)    = 0
[pid  2188] 13:28:20.455800 open("/etc/group", O_RDONLY|O_CLOEXEC) = 9
[pid  2188] 13:28:20.456030 fcntl(9, F_SETFD, FD_CLOEXEC) = 0
[pid  2188] 13:28:20.456331 fcntl(9, F_SETFD, FD_CLOEXEC) = 0
[pid  2188] 13:28:20.456544 readv(9, [{iov_base="", iov_len=0}, {iov_base="root:x:0:root\nbin:x:1:root,bin,d"..., iov_len=1024}], 2) = 776
[pid  2188] 13:28:20.456799 readv(9, [{iov_base="", iov_len=0}, {iov_base="", iov_len=1024}], 2) = 0
[pid  2188] 13:28:20.456956 close(9)    = 0
[pid  2188] 13:28:20.457134 setgroups(3, [103, 82, 103]) = 0
[pid  2188] 13:28:20.457365 rt_sigprocmask(SIG_BLOCK, ~[RTMIN RT_1 RT_2], [HUP INT QUIT USR1 USR2 ALRM TERM CHLD WINCH IO], 8) = 0
[pid  2188] 13:28:20.457534 rt_sigprocmask(SIG_BLOCK, ~[], NULL, 8) = 0
[pid  2188] 13:28:20.457818 setuid(102) = 0
[pid  2188] 13:28:20.457990 futex(0x74ae1cf95064, FUTEX_WAKE_PRIVATE, 2147483647) = 0
[pid  2188] 13:28:20.458159 rt_sigprocmask(SIG_SETMASK, [HUP INT QUIT USR1 USR2 ALRM TERM CHLD WINCH IO], NULL, 8) = 0
[pid  2188] 13:28:20.458378 prctl(PR_SET_DUMPABLE, SUID_DUMP_USER) = 0
[pid  2188] 13:28:20.458598 chdir("/var/www") = 0
[pid  2188] 13:28:20.458868 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
[pid  2188] 13:28:20.459703 epoll_create1(0) = 9
[pid  2188] 13:28:20.459994 eventfd2(0, 0) = 10
[pid  2188] 13:28:20.460340 epoll_ctl(9, EPOLL_CTL_ADD, 10, {EPOLLIN|EPOLLET, {u32=1109208384, u64=14363479846208}}) = 0
[pid  2188] 13:28:20.460600 eventfd2(0, 0) = 11
[pid  2188] 13:28:20.460878 ioctl(11, FIONBIO, [1]) = 0
[pid  2188] 13:28:20.461043 io_setup(32, [0x74ae1ce79000]) = 0
[pid  2188] 13:28:20.461389 epoll_ctl(9, EPOLL_CTL_ADD, 11, {EPOLLIN|EPOLLET, {u32=1109208032, u64=14363479845856}}) = 0
[pid  2188] 13:28:20.461729 socketpair(AF_UNIX, SOCK_STREAM, 0, [12, 13]) = 0
[pid  2188] 13:28:20.462043 epoll_ctl(9, EPOLL_CTL_ADD, 12, {EPOLLIN|EPOLLRDHUP|EPOLLET, {u32=1109208032, u64=14363479845856}}) = 0
[pid  2188] 13:28:20.462255 close(13)   = 0
[pid  2188] 13:28:20.462608 epoll_pwait(9, [{EPOLLIN|EPOLLHUP|EPOLLRDHUP, {u32=1109208032, u64=14363479845856}}], 1, 5000, NULL, 8) = 1
[pid  2188] 13:28:20.462969 close(12)   = 0
[pid  2188] 13:28:20.463325 mmap(NULL, 987136, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x74ae1bfaf000
[pid  2188] 13:28:20.463517 mmap(NULL, 397312, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x74ae1ce18000
[pid  2188] 13:28:20.464039 mmap(NULL, 397312, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x74ae1cdb7000
[pid  2188] 13:28:20.466039 epoll_ctl(9, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLRDHUP, {u32=469430304, u64=128291142561824}}) = 0
[pid  2188] 13:28:20.466432 close(3)    = 0
[pid  2188] 13:28:20.466763 epoll_ctl(9, EPOLL_CTL_ADD, 8, {EPOLLIN|EPOLLRDHUP, {u32=469430544, u64=128291142562064}}) = 0
// Event loop starts here
[pid  2188] 13:28:20.467046 epoll_pwait(9, [{EPOLLIN, {u32=469430304, u64=128291142561824}}], 512, -1, NULL, 8) = 1
[pid  2188] 13:28:34.390021 accept4(7, {sa_family=AF_INET, sin_port=htons(54280), sin_addr=inet_addr("10.0.0.15")}, [112->16], SOCK_NONBLOCK) = 3
[pid  2188] 13:28:34.390110 epoll_ctl(9, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLRDHUP|EPOLLET, {u32=469430784, u64=128291142562304}}) = 0
[pid  2188] 13:28:34.390188 epoll_pwait(9, [{EPOLLIN, {u32=469430784, u64=128291142562304}}], 512, 30000, NULL, 8) = 1
[pid  2188] 13:28:34.390245 recvfrom(3, "GET /index.html HTTP/1.0\r\nHost: "..., 2048, 0, NULL, NULL) = 93
[pid  2188] 13:28:34.390462 writev(3, [{iov_base="HTTP/1.1 200 OK\r\nServer: nginx/1"..., iov_len=142}], 1) = 142
[pid  2188] 13:28:34.390602 close(3)    = 0

strace of the test code:

localhost:/~# strace -tt -f ./test 
18:07:49.511351 execve("./test", ["./test"], 0x735df8f6c0a0 /* 16 vars */) = 0
18:07:49.514382 arch_prctl(ARCH_SET_FS, 0x71a7644fdb88) = 0
18:07:49.515405 set_tid_address(0x71a7644fdbc0) = 2325
18:07:49.515983 mprotect(0x71a7644fa000, 4096, PROT_READ) = 0
18:07:49.516742 mprotect(0x322e536d000, 4096, PROT_READ) = 0
18:07:49.517329 socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 3
18:07:49.518692 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
18:07:49.519955 bind(3, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
18:07:49.521092 listen(3, 511)          = 0
18:07:49.521764 epoll_create1(EPOLL_CLOEXEC) = 4
18:07:49.522239 epoll_ctl(4, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLRDHUP, {u32=3, u64=3}}) = 0
18:07:49.523392 epoll_pwait(4, [{EPOLLIN, {u32=3, u64=3}}], 512, -1, NULL, 8) = 1
18:07:52.616270 accept4(3, NULL, NULL, SOCK_CLOEXEC|SOCK_NONBLOCK) = 5
18:07:52.617149 epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLRDHUP|EPOLLET, {u32=5, u64=5}}) = 0
18:07:52.618188 epoll_pwait(4, [{EPOLLIN, {u32=5, u64=5}}], 512, -1, NULL, 8) = 1
18:07:52.619472 recvfrom(5, "GET /index.html HTTP/1.0\r\nHost: "..., 2048, 0, NULL, NULL) = 93
18:07:52.620474 writev(5, [{iov_base="HTTP/1.1 200 OK\r\nServer: nginx/1.12"..., iov_len=142}], 1) = 142
18:07:52.622846 close(5)                = 0


Edit:
I did some more tracing ... 400,000 requests from a remote host ... still no clue why this happens:

localhost:/~# strace -c -f /usr/sbin/nginx 
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 47.11    0.040309           0    400000           writev
 44.55    0.038115           0    400021           close
  3.11    0.002658           0    400002           accept4
  1.80    0.001538           0    400002           recvfrom
  1.74    0.001486           0    400007           epoll_ctl
  1.69    0.001450           0    400008           epoll_pwait

localhost:/~# strace -c -f ./test 
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 47.90    0.042760           0    400002           writev
 44.27    0.039518           0    400002           close
  3.13    0.002793           0    400002           accept4
  1.80    0.001610           0    400002           recvfrom
  1.57    0.001400           0    400005           epoll_pwait
  1.33    0.001183           0    400003           epoll_ctl


Edit:
As suggested in the comment section, I tested again with an eye on memory alignment ...

localhost:~# strace -e raw=recvfrom,writev ./test 
recvfrom(0x5, 0x7c7364ae2ee0, 0x800, 0, 0, 0) = 0x5d
writev(0x5, 0x7c7364ae1650, 0x1)        = 0x69

localhost:~# strace -f -e raw=recvfrom,writev /usr/sbin/nginx 
[pid  2381] recvfrom(0x3, 0x2344ebbd6a0, 0x800, 0, 0, 0) = 0x5d
[pid  2381] writev(0x3, 0x7df116c39f90, 0x1) = 0x86


Edit:
As requested, the nginx config:

user nginx;
worker_processes 1;
working_directory /var/www;

events {
    accept_mutex off;
    worker_connections 4096;
}
http {
    include /etc/nginx/mime.types;
    server {
        listen 8081;
        server_name "10.0.0.110";
        return 200;
    }
}
Asked Apr 04 '18 by Xatian

1 Answer

The short answer:
Sometimes being faster means being slower.


The long answer:

In a single-request setting (one request at a time) epoll_wait always blocks and forces a context switch. The client (ab in this case) has to receive and process the response and send the next request before the server is woken up again and epoll_wait returns the pending socket. The speed at which the server generates the response does not change this behaviour, since the client always waits for the server to respond before sending the next request.

In a multi-request setting, whether epoll_wait forces a context switch depends on the performance of the server. If the server application is slower, the chance that the next request is already waiting is higher; in that case no context switch is necessary, since epoll_wait can return immediately. A context switch is expensive and may take longer than the next request takes to arrive, wasting time.
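
One way to check this (a sketch, not part of the measurements below) is to read the context-switch counters the kernel keeps per process; ru_nvcsw only grows when the process actually blocks, e.g. in epoll_wait:

/* Sketch: print the context-switch counters; call periodically from the
 * event loop. ru_nvcsw counts voluntary switches (the process blocked),
 * ru_nivcsw counts involuntary ones (the process was preempted). */
#include <stdio.h>
#include <sys/resource.h>

static void printContextSwitches (void) {
    struct rusage usage;
    getrusage (RUSAGE_SELF, &usage);
    fprintf (stderr, "voluntary: %ld  involuntary: %ld\n",
             usage.ru_nvcsw, usage.ru_nivcsw);
}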

I realised this when I eliminated the chance of epoll_wait making a context switch by setting the timeout to zero. This forces epoll_wait to return immediately even if there is no pending request (so it never waits and therefore there is no forced context switch). In this case the test application far outperforms nginx even in a multi-request setting.
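
The change itself is a single constant in the loop from the question — with a timeout of 0 the loop spins instead of sleeping (burning a full core, but never needing a wake-up):

for (;;) {
    struct epoll_event pollEvent[512];
    // timeout 0 instead of -1: return immediately even when nothing is
    // pending, so the process never sleeps and never has to be woken up
    int eventCount = epoll_wait (efd, pollEvent, 512, 0);
    for (int i = 0; i < eventCount; ++i) {
        // ... handle events exactly as before ...
    }
}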

I further confirmed my theory by:

  • removing work from nginx (disabling the access log), which made it slower
  • adding work to the test application (a spin loop), which made it faster

in a multi-request setting.


For this answer I retook all measurements. All tests were run on an updated Ubuntu box with 8 cores (so more than 5), with the server and the client running on the same machine at the same time. The test application is unchanged.

  • gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
  • nginx version: nginx/1.14.0 (Ubuntu)

I wrote a small script to come up with the requests/second numbers below; it runs ab with 60,000 requests 100 times and picks the fastest outcome.

#!/bin/bash
concurrency=4   # 1 or 4, depending on the test case
maxRequests=0
for ((i = 0; i < 100; ++i)); do
    requests=$(ab -n 60000 -c ${concurrency} http://127.0.0.1:8081/index.html 2>&1 | grep "Requests per second" | cut -d" " -f7)
    requests=${requests%.*}   # drop the fractional part
    maxRequests=$(( maxRequests > requests ? maxRequests : requests ))
done
echo ${maxRequests}



Here again are the base values, with the same settings as in the question.

Requests per second: 13507 (test-application, 1c)
Requests per second: 27648 (test-application, 4c)
Requests per second: 11755 (nginx, 1c)
Requests per second: 31446 (nginx, 4c)

To make nginx faster per request (and therefore slower with 4c) I disabled the access log.
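
In the config from the question that is a one-line addition to the http block (access_log is a standard nginx directive):

http {
    access_log off;   # skip the per-request write to the access log
    ...
}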

Requests per second: 12028 (nginx, 1c, no access-log)
Requests per second: 28976 (nginx, 4c, no access-log)

Here is the result if the test application never sleeps, by setting the epoll_wait timeout to zero.
What's interesting is the difference in the 1c setting, showing how much it costs to wake up the test application when a new request arrives.

Requests per second: 20079 (test-application, 1c, spinning)
Requests per second: 34522 (test-application, 4c, spinning)

For these last measurements I added some work to the test application, making it artificially slower to increase the chance of avoiding a context switch. I added the code below to the end of the inner for-loop (after the close) and varied the initial value of j to get the different results:

uint8_t j = 50;             // needs #include <stdint.h>; vary this initial value
while (--j != 0) {
    uint8_t i = 0;          // wraps to 255, so the inner loop spins 255 times
    while (--i != 0)
        asm("");            // empty asm stops the compiler from removing the loop
}
Requests per second: 12910 (test-application, 1c, j=50)
Requests per second: 12126 (test-application, 1c, j=100)
Requests per second: 11634 (test-application, 1c, j=150)
Requests per second: 11020 (test-application, 1c, j=200)
Requests per second: 10235 (test-application, 1c, j=250)
Requests per second: 27447 (test-application, 4c, j=25)
Requests per second: 29464 (test-application, 4c, j=50)
Requests per second: 31334 (test-application, 4c, j=75)
Requests per second: 32079 (test-application, 4c, j=100)
Requests per second: 33510 (test-application, 4c, j=125)
Requests per second: 34241 (test-application, 4c, j=150)
Requests per second: 34189 (test-application, 4c, j=175)
Requests per second: 33855 (test-application, 4c, j=200)
Requests per second: 33328 (test-application, 4c, j=250)
Answered Oct 31 '22 by Xatian