I have tried to create a simple program in several languages (C#, Java, C++, PHP) to connect to a server, and all of them behaved the same way, so I believe this is more of an OS-level problem.
Basically I want the program to connect to the server with a TCP socket, send 1 byte, and then close the socket. This needs to be done thousands of times per second and sustained over a period of time. This is for the purpose of benchmarking the server.
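For reference, the connect/send-one-byte/close loop described above looks roughly like this in Java (one of the languages I tried). This is a self-contained sketch: it spins up a throwaway local acceptor thread so it can run anywhere, whereas the real test points at the proprietary server instead.

```java
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class OneByteClient {
    public static void main(String[] args) throws Exception {
        // Stand-in server on an ephemeral port so the sketch is self-contained;
        // the real benchmark targets the proprietary server instead.
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            Thread acceptor = new Thread(() -> {
                try {
                    while (true) {
                        server.accept().close();
                    }
                } catch (Exception ignored) { }
            });
            acceptor.setDaemon(true);
            acceptor.start();

            int connections = 100;       // thousands per second in the real test
            for (int i = 0; i < connections; i++) {
                try (Socket client = new Socket("127.0.0.1", port)) {
                    OutputStream out = client.getOutputStream();
                    out.write(0x01);     // the single payload byte
                    out.flush();
                }                        // try-with-resources closes the socket
            }
            System.out.println("completed " + connections
                    + " connect/send/close cycles");
        }
    }
}
```

Each iteration consumes a fresh local ephemeral port, which is exactly why the loop eventually stalls when run at high rates.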
So far, after a few thousand client sockets, the system comes to a complete halt. It can only start creating sockets again after a minute or so of cool-down. I have made sure that I close each socket after transmission.
Now, this kind of problem is well known with servers like Apache, which is why benchmarking utilities (like ab/siege) are advised to use the keep-alive protocol: create a small number of TCP connections and make multiple requests through them. That is not possible in our case, however, as our proprietary server does not serve HTTP and does not support the HTTP 1.1 keep-alive model.
So how can this be achieved? I have checked the following limits:

- ulimit is set to a very high number.
- TCP TIME_WAIT is eliminated by setting /proc/sys/net/ipv4/tcp_tw_recycle and /proc/sys/net/ipv4/tcp_tw_reuse to 1. (I have indeed confirmed with netstat that there are no TIME_WAIT sockets.)

PS: This is NOT a server-side limitation. We tested this by buying another box and running the same client code on it when the first client box refused to make new sockets; the server handled it fine. We don't want to buy 5-10 boxes and rotate between them to work around this problem.
OS: Fedora 10 Linux 2.6.24-23-xen #1 SMP
Maximum number of sockets: for most socket interfaces, the maximum number of sockets allowed between one application and the TCP/IP sockets interface is 65,535, since a TCP port number is a 16-bit value.
There can only be one listening socket for the specified network interface and port, but that socket can be shared between several applications.
@premktiw: Yes, multiple client sockets can be bound to the same local IP/port pair at the same time, provided they are connected to different server IP/port pairs, so that each (local IP, local port, remote IP, remote port) tuple is unique.
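A minimal sketch of that 4-tuple point, under the assumption (Linux-specific behavior) that SO_REUSEADDR permits binding a second client socket to an already-used local IP/port as long as it then connects to a different remote endpoint:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class SameLocalPort {
    public static void main(String[] args) throws Exception {
        InetAddress lo = InetAddress.getLoopbackAddress();
        // Two throwaway listeners on different ports, so the two client
        // connections end up with different remote halves of the 4-tuple.
        try (ServerSocket s1 = new ServerSocket(0, 50, lo);
             ServerSocket s2 = new ServerSocket(0, 50, lo)) {

            // First client: let the OS pick the local port.
            Socket a = new Socket();
            a.setReuseAddress(true);
            a.bind(new InetSocketAddress(lo, 0));
            a.connect(new InetSocketAddress(lo, s1.getLocalPort()));

            // Second client: bind to the SAME local IP/port, but connect to a
            // different remote port, keeping the full 4-tuple unique.
            Socket b = new Socket();
            b.setReuseAddress(true);
            b.bind(new InetSocketAddress(lo, a.getLocalPort()));
            b.connect(new InetSocketAddress(lo, s2.getLocalPort()));

            System.out.println("both connected from local port "
                    + a.getLocalPort() + " == " + b.getLocalPort());
            a.close();
            b.close();
        }
    }
}
```

If the second bind or connect fails on your kernel, that itself demonstrates the constraint being discussed: the OS only has to keep 4-tuples unique, but not every stack lets applications share a bound local port this way.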
Old joke: Man goes to doctor, says "Doctor, it hurts when I do this," twisting his arm into a strange position.
Doctor replies, "Well, don't do that!"
Look, what you're doing is a very unnatural process. Establishing a TCP connection requires a handshake, transmitting bytes far in excess of the one byte per message. Setup and teardown time are going to be significant. It's very probable that what you're doing is using up kernel resources associated with that handshake; sure enough, if you then let it alone and stop slapping it, it eventually catches up.
So, what are you really trying to measure? What are you really trying to do? If you're really trying to send a single byte at a time -- gods forbid -- at least think about using UDP; there's no awful setup/teardown. It's still immensely inefficient relative to the payload -- even a UDP packet carries something like 28 bytes of IP and UDP headers around your single byte -- but it's better.
Take a look at Richard Jones' article, A Million-user Comet Application with Mochiweb, Part 3. It's about implementing a Comet app in Erlang, but the section "Turning it up to 1 Million" describes how he benchmarked his server; it opens with the statement "Creating a million tcp connections from one host is non-trivial." That should give you some idea of what you're in for.