I have been load testing my REST APIs using JMeter.
When the test reaches 1000 concurrent users I get the following error:
Too many open files. Stack trace follows:
java.net.SocketException: Too many open files
at java.net.Socket.createImpl(Socket.java:397)
at java.net.Socket.getImpl(Socket.java:460)
at java.net.Socket.setSoTimeout(Socket.java:1017)
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:126)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:640)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:479)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at groovyx.net.http.HTTPBuilder.doRequest(HTTPBuilder.java:476)
at groovyx.net.http.HTTPBuilder.doRequest(HTTPBuilder.java:441)
at groovyx.net.http.HTTPBuilder.request(HTTPBuilder.java:390)
My server calls another REST API to fetch the data, processes it, and finally returns a JSON response.
How do I increase the number of open files in Linux?
The following is the call I make to the other server:
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.Method
import static groovyx.net.http.ContentType.JSON

Map getResponse(Map data, String url) {
    HTTPBuilder httpBuilder = new HTTPBuilder(url)
    // POST the data as JSON; the value returned by the matching
    // response closure becomes the return value of request()
    httpBuilder.request(Method.POST, JSON) {
        headers.'Authorization' = AppConfig.config.appKey
        headers.'Content-type' = 'application/json'
        body = data
        response.success = { resp, reader ->
            return reader as Map
        }
        response.failure = { resp, reader ->
            return null
        }
    }
}
You have almost certainly hit the maximum number of open files/sockets. On Linux machines the default limit is 1024 open files per process, so you need to raise it. You can refer to java.net.SocketException Too many open files.
You can run the following command in your terminal to check the maximum number of allowed open files:
ulimit -n
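To raise the limit for the current shell session (and for any JMeter or server process started from it), you can typically run something like the following; 65535 is only an example value, and raising the hard limit may require root:
ulimit -n 65535
On most distributions you can make the change permanent by adding entries to /etc/security/limits.conf (adjust the user and the value to your setup) and logging in again:
*    soft    nofile    65535
*    hard    nofile    65535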
From here:
What's happening is that the underlying sockets aren't being closed, and eventually the JVM bumps into the system's per-process limit on open file descriptors.
The right solution would be to make the sockets close at the Right Time (which I guess is when, or shortly after, the server has closed its end of the connection). That seems hard with HttpURLConnection. It's all very confused:
disconnect() just seems to close it immediately -- or not; the Javadocs are intentionally vague about what it actually does, and especially when it does it.
close() might be the right choice. The Evaluation section of Java bug #4147525 says: "... call close() on the input and/or output stream. This will correctly result in the underlying socket being closed when you aren't doing keepalive connections and will correctly cache and reuse keepalive connections (and which will timeout and close themselves after a short time anyway)."
But maybe not. Bug #4142971 says: "Calling the close() methods has no effect one way or the other on whether the underlying HTTP connection is persistent."
Failing a clear answer, perhaps the HttpURLConnection objects could be added to a list, and all disconnected at once at the end of the test run. That'd still limit the total size of the run, but at least the lost descriptors wouldn't accumulate between runs.
Maybe the real answer is to give up on HttpURLConnection, and instead use the HTTP Client from Jakarta Commons. Someone suggested that in connection with a different problem (bug #4143518).
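Note that HTTPBuilder already delegates to Apache HttpClient under the hood (that is what the stack trace above shows), so in the getResponse method from the question the more likely leak is that a new HTTPBuilder, and with it a new connection manager, is created on every call and never released. A minimal sketch of one way to release it, assuming the stock groovyx.net.http.HTTPBuilder API where shutdown() closes the underlying connection manager:

import groovyx.net.http.HTTPBuilder
import groovyx.net.http.Method
import static groovyx.net.http.ContentType.JSON

Map getResponse(Map data, String url) {
    HTTPBuilder httpBuilder = new HTTPBuilder(url)
    try {
        return httpBuilder.request(Method.POST, JSON) {
            headers.'Authorization' = AppConfig.config.appKey
            headers.'Content-type' = 'application/json'
            body = data
            response.success = { resp, reader -> reader as Map }
            response.failure = { resp, reader -> null }
        } as Map
    } finally {
        // Release the underlying connection manager so the socket's
        // file descriptor is not left open after the call returns.
        httpBuilder.shutdown()
    }
}

Under sustained load it is usually better still to create a single HTTPBuilder (or a shared HttpClient with a pooled connection manager) and reuse it for all requests, so that keep-alive connections are recycled instead of a new socket being opened and torn down on every call.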
"java.net.SocketException: Too many files open"can be seen any Java Server application e.g. Tomcat, Weblogic, WebSphere etc, with client connecting and disconnecting frequently.
You can find out how to solve "java.net.SocketException: Too many files open" here
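To confirm which process is actually leaking descriptors, you can count how many files a process currently holds open on Linux; for example (replace <pid>, a placeholder here, with the id of the JMeter or server process):
lsof -p <pid> | wc -l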