
Java Sockets: No buffer space available (maximum connections reached?)

I have a big problem. I have developed a client-server application: a client thread sends a serialized object to the server, and the server sends back a serialized object. Currently I am using one server and 10 client threads, and after about 30 seconds every client thread gets the following error (IOException):

No buffer space available (maximum connections reached?): connect

If I look at netstat, I see that a lot of connections are created, the number keeps growing, and all of them are in TIME_WAIT state.

I don't know why. I close the sockets on the server and on the clients every time in a finally block. Here is some code:

On the server side, in socketHandlerThread, I have:

ServerSocket serverSocket = new ServerSocket(port);
serverSocket.setSoTimeout(5000); // accept() times out after 5 seconds
while (true) {
    Socket socket = serverSocket.accept();
    // the accepted socket is handed off to a worker queue (see below)
}

The new socket is then put on a LinkedBlockingQueue, and a worker thread takes the socket and does the following:

try {
    outputStream = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
    outputStream.flush();
    inStream = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
    ClientRequest clientRequest = (ClientRequest) inStream.readObject();
    ...
    outputStream.writeObject(serverResponse);
    outputStream.flush();
} catch ....
} finally {
    if (inStream != null) {
        inStream.close();
    }
    if (outputStream != null) {
        outputStream.close();
    }
    if (socket != null) {
        socket.close();
    }
}
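
To make the handoff between socketHandlerThread and the workers clearer, here is roughly how it is structured (simplified; the class and method names are not the real ones from my code):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ServerSketch {

    private final BlockingQueue<Socket> queue = new LinkedBlockingQueue<>();

    // socketHandlerThread: accept connections and hand them to the queue
    void acceptLoop(ServerSocket serverSocket) throws IOException, InterruptedException {
        while (true) {
            Socket socket = serverSocket.accept();
            queue.put(socket); // hand the accepted socket to the worker queue
        }
    }

    // worker thread: take a socket from the queue and process one request
    void workerLoop() throws InterruptedException {
        while (true) {
            Socket socket = queue.take();
            handle(socket); // the try/catch/finally block shown above
        }
    }

    private void handle(Socket socket) {
        // reads the ClientRequest, writes the response, closes in finally
    }
}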

On the client side I have the following code:

try {
    socket = new Socket(host, port);
    outputStream = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
    outputStream.flush();
    inputStream = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
    outputStream.writeObject(request);
    outputStream.flush();
    Object serverResponse = inputStream.readObject();
} catch ....
} finally {
    if (inputStream != null) {
        inputStream.close();
    }
    if (outputStream != null) {
        outputStream.close();
    }
    if (socket != null) {
        socket.close();
    }
}

Can somebody help? I really don't know what mistake I made. It seems that the sockets do not get closed, but I don't know why.

Could the problem be that I put the sockets on a queue on the server side, so that the socket is somehow copied?

Edit: If I put the client and the server on two different Amazon EC2 Classic instances running the Linux AMI, then it works. Could it be a problem with Windows, or is the problem simply that I was running the clients and the server on the same machine (my local PC)?

Does somebody see a bug in my code?

Edit 2: As said above, it works on the EC2 instances, but netstat still shows a lot of lines in TIME_WAIT state.

Here are screenshots:

https://drive.google.com/file/d/0BzERdJrwWrNCWjhReGhpR2FBMUU/view?usp=sharing

https://drive.google.com/file/d/0BzERdJrwWrNCOG1TWGo5YmxlaTg/view?usp=sharing

The first screenshot is from Windows. "WARTEND" means "WAITING" (it is German).

The second screenshot is from Amazon EC2 (to the left the client machine, to the right the server machine).

asked Oct 20 '22 by machinery
1 Answer

TIME-WAIT is entered after the connection is closed at both ends. It lasts for a couple of minutes, for data integrity reasons.

If the buffer problem is due to TIME-WAIT states at the server, the solution is to make the server be the peer that first receives the close. That will shift the TIME-WAIT state to the client, where it is benign.

You can do that by putting your server-side request handling into a loop, so that it can handle multiple requests per connection, and so that the server only closes the socket when it reaches end of stream on it.

for (;;)
{
    try
    {
        ClientRequest clientRequest = (ClientRequest) inStream.readObject();
        ...
        outputStream.writeObject(serverResponse);
        outputStream.flush();
    }
    catch (EOFException exc)
    {
        // end of stream: the client has closed the connection, so stop
        // and let the finally block close this socket
        break;
    }
}

If you then implement connection-pooling at the client, you will massively reduce the number of connections, which will further reduce the incidence of the buffer problem.
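
For example, a minimal client-side pool could look something like this (a sketch only; the class and method names are illustrative, not from any library). The object streams are kept together with the socket, because each new ObjectOutputStream writes a stream header that the server's single ObjectInputStream would not expect:

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ConnectionPool {

    public static final class PooledConnection {
        public final Socket socket;
        public final ObjectOutputStream out;
        public final ObjectInputStream in;

        PooledConnection(Socket socket) throws IOException {
            this.socket = socket;
            this.out = new ObjectOutputStream(socket.getOutputStream());
            this.out.flush(); // push the stream header so the server can construct its ObjectInputStream
            this.in = new ObjectInputStream(socket.getInputStream());
        }
    }

    private final BlockingQueue<PooledConnection> idle;
    private final String host;
    private final int port;

    public ConnectionPool(String host, int port, int maxIdle) {
        this.host = host;
        this.port = port;
        this.idle = new LinkedBlockingQueue<>(maxIdle);
    }

    // Borrow an idle connection, or open a new one if none is available.
    public PooledConnection borrow() throws IOException {
        PooledConnection conn = idle.poll();
        if (conn != null && !conn.socket.isClosed()) {
            return conn;
        }
        return new PooledConnection(new Socket(host, port));
    }

    // Return the connection for reuse; close it if the pool is already full.
    public void release(PooledConnection conn) throws IOException {
        if (!idle.offer(conn)) {
            conn.socket.close();
        }
    }
}

A client thread then borrows a connection, writes its request, reads the response, and releases the connection instead of closing it; only broken connections get closed and replaced.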

answered Oct 22 '22 by user207421