First, thanks to all the Netty contributors for the great library. I have been using it happily for several weeks.
Recently, I started load testing my system, but I'm now running into a scalability problem with Netty. I tried to fork as many simultaneous Netty clients as possible to connect to a Netty server. With a small number of clients (<50), the system works fine. With a large number of clients (>100), however, the client side always reports a ClosedChannelException:
java.nio.channels.ClosedChannelException
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$1.operationComplete(NioClientSocketPipelineSink.java:157)
    at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:381)
    at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:367)
    at org.jboss.netty.channel.DefaultChannelFuture.setSuccess(DefaultChannelFuture.java:316)
    at org.jboss.netty.channel.AbstractChannel$ChannelCloseFuture.setClosed(AbstractChannel.java:351)
    at org.jboss.netty.channel.AbstractChannel.setClosed(AbstractChannel.java:188)
    at org.jboss.netty.channel.socket.nio.NioSocketChannel.setClosed(NioSocketChannel.java:146)
    at org.jboss.netty.channel.socket.nio.NioWorker.close(NioWorker.java:592)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.close(NioClientSocketPipelineSink.java:415)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processConnectTimeout(NioClientSocketPipelineSink.java:379)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:299)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
I am wondering how to make Netty support more simultaneous client connections, such as 10K. I am using the newest version of Netty. The testing scenario is as follows:
Each client sends a four-letter string to the server, and the server handler does nothing upon receiving the string. The server and the clients each run on a high-performance machine with an eight-core CPU and 16 GB of memory. The two machines are connected by a Gigabit network.
Do you have any hints?
If you're using pure Netty, just use the code above and the 50-concurrent-connection limit will vanish immediately. Also worth noting: this issue exists in Netty 3.x, but Netty 4.x apparently sets a better default than 50 on all operating systems, so upgrading Netty may be another solution.
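For reference, here is a minimal sketch of how the same setting can be made explicit in Netty 4.x via ChannelOption.SO_BACKLOG; the port, the 1024 value, and the empty initializer are illustrative assumptions, not from the original post:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class BacklogServer {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup bossGroup = new NioEventLoopGroup(1);
        NioEventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup)
                     .channel(NioServerSocketChannel.class)
                     // Raise the accept-queue size instead of relying on the default.
                     .option(ChannelOption.SO_BACKLOG, 1024)
                     .childHandler(new ChannelInitializer<SocketChannel>() {
                         @Override
                         protected void initChannel(SocketChannel ch) {
                             // Add your handlers here.
                         }
                     });
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}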
Single-threaded Concurrency Is Still New Ground
High-performance I/O toolkits such as Netty, Vert.x, and Undertow use a single-threaded (event-loop) server design.
If such a host issues a CLOSE call while received data is still pending in TCP, or if new data is received after CLOSE is called, its TCP SHOULD send a RST to show that data was lost. In other words, the RFC allows, and even encourages, a RST to be sent in this scenario. Note that Netty does not implement TCP itself; it relies on the operating system's TCP stack.
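To make that concrete, here is a minimal sketch using plain java.net sockets (not from the original discussion; the port, strings, and timings are arbitrary) showing the described behaviour: closing a socket while received data is still unread causes the peer's subsequent writes to fail with a connection reset:

import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class RstDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000)) {
            Socket client = new Socket("localhost", 9000);
            Socket accepted = server.accept();

            client.getOutputStream().write("ping".getBytes()); // data now pending at the peer
            accepted.close();                                  // close without reading -> peer sends RST

            Thread.sleep(200); // give the RST time to arrive
            try {
                OutputStream out = client.getOutputStream();
                out.write("pong".getBytes());
                out.write("pong".getBytes()); // usually fails with "Connection reset"
            } catch (IOException e) {
                System.out.println("Peer reset the connection: " + e.getMessage());
            } finally {
                client.close();
            }
        }
    }
}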
It supports SSL/TLS, offers a unified API for both blocking and non-blocking transports, and has a flexible threading model. It is also fast. Netty's asynchronous, non-blocking I/O model is designed for highly scalable architectures and may allow for higher throughput than an analogous blocking model.
1) You can tweak the connectTimeout in the client bootstrap to make sure there are no network/server issues:
clientBootStrap.setOption("connectTimeoutMillis", optimumTimeout);
2) By setting the backlog value on the Netty server, you can increase the size of the incoming-connection queue, so clients have a better chance of connecting to the server:
serverBootStrap.setOption("backlog", 1000);
3) You said that your application creates many connections simultaneously; the client boss thread may lag behind if the application connects too fast.
Netty 3.2.7.Final lets you set more than one client boss thread in the NioClientSocketChannelFactory constructor to avoid this issue, as in the sketch below.
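A minimal sketch of that constructor, assuming Netty 3.2.7+; the thread counts and timeout value are illustrative, not recommendations from the answer:

import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

public class ClientFactorySetup {
    public static ClientBootstrap newBootstrap() {
        // Four-argument constructor: boss executor, worker executor, bossCount, workerCount.
        NioClientSocketChannelFactory factory = new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),  // boss executor
                Executors.newCachedThreadPool(),  // worker executor
                2,                                // more than one client boss thread
                16);                              // worker threads
        ClientBootstrap clientBootStrap = new ClientBootstrap(factory);
        clientBootStrap.setOption("connectTimeoutMillis", 10000);
        return clientBootStrap;
    }
}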