
Handling more than 1024 file descriptors, in C on Linux

I am working on a threaded network server using epoll (edge-triggered), and I'm using httperf to benchmark it.

So far it's performing really well, serving requests at almost exactly the rate they are sent, right up to the 1024 barrier, where everything slows down to around 30 requests/second.

Running on Ubuntu 9.04 64-bit.

I've already tried:

  • Increasing the number of file descriptors with ulimit, successfully. It just doesn't improve performance above 1024 concurrent connections (a setrlimit() sketch for doing the same thing in-process follows below).

    andri@filefridge:~/Dropbox/School/Group 452/Code/server$ ulimit -n
    20000
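
For reference, a process can also raise this limit for itself at startup via setrlimit() instead of relying on the shell's ulimit. A minimal sketch, not the poster's code; note that raising the hard limit beyond the system maximum requires root or CAP_SYS_RESOURCE:

    /* Raise this process's file-descriptor soft limit to its hard limit. */
    #include <sys/resource.h>
    #include <stdio.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
            perror("getrlimit");
            return 1;
        }

        rl.rlim_cur = rl.rlim_max;  /* lift soft limit up to the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
            perror("setrlimit");
            return 1;
        }

        printf("file-descriptor limit now %llu\n",
               (unsigned long long)rl.rlim_cur);
        return 0;
    }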

I am pretty sure that this slow-down is happening in the operating system as it happens before the event is sent to epoll (and yes, I've also increased the limit in epoll).
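
For context, the only per-instance tunable epoll exposes here is the size argument to epoll_create(), which is only a hint and has been ignored by the kernel since Linux 2.6.8, so epoll itself imposes no 1024-style limit. A minimal sketch of the relevant calls, reusing the 20000 figure from above:

    /* epoll has no built-in 1024 limit; the size argument is a hint
     * that the kernel has ignored since 2.6.8 (it must still be > 0). */
    #include <sys/epoll.h>
    #include <stdio.h>

    int main(void)
    {
        int epfd = epoll_create(20000);   /* hint only */
        if (epfd == -1) {
            perror("epoll_create");
            return 1;
        }

        /* Register stdin edge-triggered, just to show the call. */
        struct epoll_event ev = { .events = EPOLLIN | EPOLLET,
                                  .data.fd = 0 };
        if (epoll_ctl(epfd, EPOLL_CTL_ADD, 0, &ev) == -1) {
            perror("epoll_ctl");
            return 1;
        }
        return 0;
    }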

I need to benchmark how many concurrent connections my program can handle until it starts to slow down (without the operating system interfering).

How do I get my program to run with more than 1024 file descriptors?

This limit is probably there for a reason, but for benchmarking purposes, I need it gone.

Update

Thanks for all your answers, but I think I've found the culprit. After redefining __FD_SETSIZE in my program, everything started to move a lot faster. Of course, ulimit also needs to be raised, but without __FD_SETSIZE my program never takes advantage of it.

asked May 11 '09 by Andrioid

1 Answer

Thanks for all your answers, but I think I've found the culprit. After redefining __FD_SETSIZE in my program, everything started to move a lot faster. Of course, ulimit also needs to be raised, but without __FD_SETSIZE my program never takes advantage of it.
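
For anyone trying to reproduce this: __FD_SETSIZE is the glibc-internal macro that fixes the size of fd_set, so it only affects code paths that still use select(); epoll itself is unaffected. Whether predefining it takes effect depends on the libc version, since some glibc headers redefine it unconditionally. A minimal, glibc-specific sketch of the idea, with a runtime check to verify the override actually took:

    /* Benchmark hack, not production code: try to enlarge fd_set by
     * redefining __FD_SETSIZE before ANY system header is included.
     * glibc-specific and fragile; on libc versions that define the
     * macro unconditionally this has no effect. */
    #define __FD_SETSIZE 20000

    #include <sys/select.h>
    #include <stdio.h>

    int main(void)
    {
        /* If the override took, FD_SETSIZE reports 20000 and fd_set
         * is large enough for that many descriptors. */
        printf("FD_SETSIZE = %d, sizeof(fd_set) = %zu bytes\n",
               FD_SETSIZE, sizeof(fd_set));
        return 0;
    }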

answered Sep 26 '22 by Andrioid