I want an extremely efficient TCP client for sending Google protocol buffer messages. I have been using the Netty library to develop the server and client.
In tests the server seems to be able to handle up to 500k transactions per second without too many problems, but the client tends to peak around 180k transactions per second.
I have based my client on the examples provided in the Netty documentation, but the difference is that I just want to send the message and forget; I don't want a response (which most of the examples wait for). Is there any way to optimize my client so that I can achieve a higher TPS?
Should my client maintain multiple channels, or should I be able to achieve a higher throughput than this with a single channel?
1) If the client is only interested in sending, not in receiving, you can always disable reading on the channel, as shown below:
channel.setReadable(false);
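For context, a minimal sketch of where that call fits in a fire-and-forget client (Netty 3.x; the address, executors, and pipeline wiring are assumptions for illustration, not part of the original answer):

// Assumed setup for illustration only (Netty 3.x, org.jboss.netty.*)
ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),    // boss threads
                Executors.newCachedThreadPool()));  // worker (I/O) threads
// bootstrap.setPipelineFactory(...) with your protobuf encoder goes here

ChannelFuture connectFuture =
        bootstrap.connect(new InetSocketAddress("localhost", 8080)); // assumed address
Channel channel = connectFuture.awaitUninterruptibly().getChannel();

// Fire-and-forget: stop registering OP_READ for this channel, so the worker
// thread never spends time dispatching inbound data we don't care about.
channel.setReadable(false);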
2) You can increase throughput very easily by having multiple client channels per client, and this also scales well; see the sketch below.
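A rough sketch of what that could look like (the channel count, address, and round-robin scheme are assumptions, not part of the original suggestion):

// Open several channels to the same server and spread writes across them,
// so one channel's write buffer / worker thread is not the bottleneck.
int channelCount = 4;                        // tune for your workload
Channel[] channels = new Channel[channelCount];
for (int i = 0; i < channelCount; i++) {
    channels[i] = bootstrap.connect(new InetSocketAddress("localhost", 8080))
                           .awaitUninterruptibly()
                           .getChannel();
    channels[i].setReadable(false);          // fire-and-forget on every channel
}

AtomicLong sent = new AtomicLong();          // java.util.concurrent.atomic.AtomicLong
// In the send path, pick the next channel round-robin:
Channel next = channels[(int) (sent.getAndIncrement() % channelCount)];
next.write(protobufMessage);                 // the encoder in the pipeline serializes it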
3) You can also apply the following tweaks to improve performance in general (for both reads and writes):
It's better to have a SEDA-like pipeline by adding an ExecutionHandler backed by an OrderedMemoryAwareThreadPoolExecutor (with optimal values for the min/max channel memory); a sketch of how those handlers could be constructed follows the pipeline snippet.
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    @Override
    public ChannelPipeline getPipeline() throws Exception {
        return Channels.pipeline(
                executionHandler1,            // sharable
                new MessageDecoderHandler(),
                new MessageEncoderHandler(),
                executionHandler2,            // sharable
                new BusinessLogicHandler1(),
                new BusinessLogicHandler2());
    }
});
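The shared executionHandler1 / executionHandler2 referenced above could be constructed along these lines (pool size and memory limits are placeholder assumptions, not recommendations):

// OrderedMemoryAwareThreadPoolExecutor preserves per-channel event order while
// bounding the memory queued per channel and in total.
ExecutionHandler executionHandler1 = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(
                16,                // core pool size (assumed)
                1048576,           // max queued memory per channel (bytes)
                16 * 1048576));    // max total queued memory (bytes)
ExecutionHandler executionHandler2 = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 16 * 1048576));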
Set the channel's writeBufferHighWaterMark to an optimal value (make sure that setting too big a value does not create congestion):
bootstrap.setOption("writeBufferHighWaterMark", 10 * 64 * 1024);
Set the SO_SNDBUF and SO_RCVBUF socket buffer sizes:
bootstrap.setOption("sendBufferSize", 1048576);
bootstrap.setOption("receiveBufferSize", 1048576);
Enable TCP_NODELAY:
bootstrap.setOption("tcpNoDelay", true);
I am not sure that "tcpNoDelay" helps to improve throughput. The delay (Nagle's algorithm) exists to coalesce small writes into fewer packets, which generally favors throughput at the cost of latency. Nonetheless, I tried it and saw that throughput actually fell by more than 90%.