Apache HttpClient 4.3b2, HttpCore 4.3.
I use PoolingHttpClientConnectionManager
to manage up to 5 concurrent connections per route:
PoolingHttpClientConnectionManager connectionManager;
HttpClient httpclient;
connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setDefaultMaxPerRoute(5);
httpclient = HttpClientBuilder.create().setConnectionManager(connectionManager).build();
The server has a 5-second keep-alive timeout.
When the server initiates the connection close, the connection stays in the FIN_WAIT2 state until I manually execute connectionManager.shutdown(),
connectionManager.closeExpiredConnections(),
or connectionManager.closeIdleConnections(5, TimeUnit.SECONDS).
The server is waiting for the FIN packet. How can I automatically close connections on the client side once the server starts the close process?
When I make requests from the Chrome browser, the server goes into the TIME_WAIT state when it closes the connection via keep-alive (the FIN_WAIT2 state changes very quickly). How can I get the same behavior with Apache HttpClient?
The keep-alive duration determines when the connection is considered idle. For example, if the keep-alive strategy says to keep connections alive for 10 seconds and we receive responses from the server every 2 seconds, the connection will be kept alive for 10 seconds after the last successful response.
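A minimal sketch of how this can be tuned with HttpClient 4.3: register a custom ConnectionKeepAliveStrategy that honours the server's Keep-Alive: timeout=... header and falls back to 5 seconds otherwise. The 5-second fallback and the class name KeepAliveConfigExample are illustrative assumptions, not part of the original question.

import org.apache.http.HeaderElement;
import org.apache.http.HeaderElementIterator;
import org.apache.http.HttpResponse;
import org.apache.http.conn.ConnectionKeepAliveStrategy;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.message.BasicHeaderElementIterator;
import org.apache.http.protocol.HTTP;
import org.apache.http.protocol.HttpContext;

public class KeepAliveConfigExample {
    public static void main(String[] args) {
        // Honour the server's Keep-Alive: timeout=... header; fall back to 5 seconds.
        ConnectionKeepAliveStrategy keepAliveStrategy = new ConnectionKeepAliveStrategy() {
            @Override
            public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
                HeaderElementIterator it = new BasicHeaderElementIterator(
                        response.headerIterator(HTTP.CONN_KEEP_ALIVE));
                while (it.hasNext()) {
                    HeaderElement he = it.nextElement();
                    if (he.getValue() != null && "timeout".equalsIgnoreCase(he.getName())) {
                        try {
                            return Long.parseLong(he.getValue()) * 1000;
                        } catch (NumberFormatException ignore) {
                            // fall through to the default below
                        }
                    }
                }
                return 5 * 1000; // assumed fallback, matching the server's 5-second keep-alive
            }
        };

        PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
        connectionManager.setDefaultMaxPerRoute(5);

        CloseableHttpClient httpclient = HttpClientBuilder.create()
                .setConnectionManager(connectionManager)
                .setKeepAliveStrategy(keepAliveStrategy)
                .build();
    }
}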
If you are processing HTTP responses manually instead of using a response handler, you need to release or close every HTTP connection yourself.
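For illustration, assuming the 4.3 CloseableHttpResponse API and a placeholder URL, one common pattern is to fully consume the entity (so the connection can go back to the pool) and close the response in a finally block:

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class ManualResponseClose {
    public static void main(String[] args) throws Exception {
        CloseableHttpClient httpclient = HttpClients.createDefault();
        HttpGet httpGet = new HttpGet("http://example.com/"); // placeholder URL
        CloseableHttpResponse response = httpclient.execute(httpGet);
        try {
            System.out.println(response.getStatusLine());
            // Fully consuming the entity lets the connection be released back to the pool.
            EntityUtils.consume(response.getEntity());
        } finally {
            // Closing the response guarantees the connection is released even on errors.
            response.close();
        }
        httpclient.close();
    }
}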
When you set up connection pooling, instead of closing the client HTTP connection after use, CICS keeps the connection open and stores it in a pool in a dormant state. The dormant connection can be reused by the same application or by another application that connects to the same host and port.
Go to Monitoring and Tuning > Performance Viewer > Current activity, select the server, then in the PMI viewer select Settings > Log to define the logging period and format. In Modules > Thread pools > WebContainer you can view the current counter values. This is suited to short-term monitoring rather than constant logging.
This problem is explained in detail in the HttpClient tutorial:
One of the major shortcomings of the classic blocking I/O model is that the network socket can react to I/O events only when blocked in an I/O operation. When a connection is released back to the manager, it can be kept alive however it is unable to monitor the status of the socket and react to any I/O events. If the connection gets closed on the server side, the client side connection is unable to detect the change in the connection state (and react appropriately by closing the socket on its end).
If you want expired connections to get pro-actively evicted from the connection pool there is no way around running an additional thread enforcing a connection eviction policy that suits your application.
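A minimal sketch of such an eviction thread, modelled on the IdleConnectionMonitorThread shown in the HttpClient tutorial; the 5-second check interval and idle timeout are assumptions chosen to match the server's keep-alive time from the question.

import java.util.concurrent.TimeUnit;
import org.apache.http.conn.HttpClientConnectionManager;

public class IdleConnectionMonitorThread extends Thread {

    private final HttpClientConnectionManager connMgr;
    private volatile boolean shutdown;

    public IdleConnectionMonitorThread(HttpClientConnectionManager connMgr) {
        this.connMgr = connMgr;
    }

    @Override
    public void run() {
        try {
            while (!shutdown) {
                synchronized (this) {
                    wait(5000); // check the pool every 5 seconds
                    // Close connections whose keep-alive has expired...
                    connMgr.closeExpiredConnections();
                    // ...and, optionally, connections idle longer than 5 seconds.
                    connMgr.closeIdleConnections(5, TimeUnit.SECONDS);
                }
            }
        } catch (InterruptedException ex) {
            // thread terminates
        }
    }

    public void shutdown() {
        shutdown = true;
        synchronized (this) {
            notifyAll();
        }
    }
}

Start it once after building the client, for example new IdleConnectionMonitorThread(connectionManager).start(), and call its shutdown() method when you shut the client down. With this in place, connections half-closed by the server no longer linger in FIN_WAIT2 waiting for a manual cleanup call.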