Lots of UDP requests lost in UDP server with Netty

I wrote a simple UDP server with Netty that simply logs the messages (frames) it receives. To do that, I created a simple frame decoder and a simple message handler. I also have a client that can send multiple requests sequentially and/or in parallel.

When I configure my client tester to send, for example, a few hundred requests sequentially with a small delay between them, my server written with Netty receives them all properly. But as soon as I increase the number of simultaneous requests in my client (100 for example), coupled with sequential ones and a few repeats, my server starts losing many requests. When I send 50000 requests, for example, my server only receives about 49000 when using only the simple ChannelHandler that prints out the received message.

And when I add the simple frame decoder (which prints out the frame and copies it into another buffer) in front of this handler, the server only handles half of the requests!

I noticed that no matter how many workers I specify when creating the NioDatagramChannelFactory, there is always one and only one thread that handles the requests (I am passing the recommended Executors.newCachedThreadPool() as the other parameter).

I also created another similar simple UDP server based on the DatagramSocket that comes with the JDK, and it handles every request perfectly with zero lost! When I send 50000 requests from my client (with 1000 threads for example), I receive all 50000 requests in my server.
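
For reference, this is roughly what that plain DatagramSocket server looks like (a simplified sketch; the port and buffer size are placeholders, not my exact values):

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class PlainDatagramUdpServer
{
    public static void main( String[] args ) throws Exception
    {
        // One socket, one blocking receive loop; every datagram is logged.
        DatagramSocket lSocket = new DatagramSocket( 9999 );
        byte[] lReceiveBuffer = new byte[2048];
        while ( true )
        {
            DatagramPacket lPacket = new DatagramPacket( lReceiveBuffer, lReceiveBuffer.length );
            lSocket.receive( lPacket ); // blocks until a datagram arrives
            System.out.println( "Received " + lPacket.getLength()
                + " bytes from " + lPacket.getSocketAddress() );
        }
    }
}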

Am I doing something wrong while configuring my UDP server with Netty? Or is Netty simply not designed to support such a load? Why does the given cached thread pool only ever use one thread (I can see that only one thread, always the same one, is used, both in JMX jconsole and by checking the thread name in the output logs)? I think that if more threads were used as expected, the server would easily handle such a load, because I can do it without any problem when not using Netty!

See my initialization code below:

...

lChannelfactory = new NioDatagramChannelFactory( Executors.newCachedThreadPool(), nbrWorkers );
lBootstrap = new ConnectionlessBootstrap( lChannelfactory );

lBootstrap.setPipelineFactory( new ChannelPipelineFactory() {
    @Override
    public ChannelPipeline getPipeline()
    {
        ChannelPipeline lChannelPipeline = Channels.pipeline();
        lChannelPipeline.addLast( "Simple UDP Frame Dump DECODER", new SimpleUDPPacketDumpDecoder( null ) );            
        lChannelPipeline.addLast( "Simple UDP Frame Dump HANDLER", new SimpleUDPPacketDumpChannelHandler( lOuterFrameStatsCollector ) );            
        return lChannelPipeline;
    }
} );

bindChannel = lBootstrap.bind( socketAddress );

...

And the content of the decode() method in my decoder:

protected Object decode(ChannelHandlerContext iCtx, Channel iChannel, ChannelBuffer iBuffer) throws Exception
{
    ChannelBuffer lDuplicatedChannelBuffer = null;
    sLogger.debug( "Decode method called." );

    if ( iBuffer.readableBytes() < 8 ) return null;
    if ( outerFrameStatsCollector != null ) outerFrameStatsCollector.incrementNbrRequests();

    if ( iBuffer.readable() ) 
    {        
        sLogger.debug( convertToAsciiHex( iBuffer.array(), iBuffer.readableBytes() ) );                     
        lDuplicatedChannelBuffer = ChannelBuffers.dynamicBuffer( iBuffer.readableBytes() );            
        iBuffer.readBytes( lDuplicatedChannelBuffer );
    }

    return lDuplicatedChannelBuffer;
}

And the content of the messageReceived() method in my handler:

public void messageReceived(final ChannelHandlerContext iChannelHandlerContext, final MessageEvent iMessageEvent) throws Exception
{
    ChannelBuffer lMessageBuffer = (ChannelBuffer) iMessageEvent.getMessage();
    if ( outerFrameStatsCollector != null ) outerFrameStatsCollector.incrementNbrRequests();

    if ( lMessageBuffer.readable() ) 
    {        
        sLogger.debug( convertToAsciiHex( lMessageBuffer.array(), lMessageBuffer.readableBytes() ) );            
        lMessageBuffer.discardReadBytes();
    }
}
Asked Mar 09 '12 by The4Summers

1 Answer

You have not properly configured the ConnectionlessBootstrap instance.

  1. You have to configure the following options with optimal values:

    SO_SNDBUF size, SO_RCVBUF size and a ReceiveBufferSizePredictorFactory

    lBootstrap.setOption("sendBufferSize", 1048576);
    
    lBootstrap.setOption("receiveBufferSize", 1048576);
    
    lBootstrap.setOption("receiveBufferSizePredictorFactory", 
     new AdaptiveReceiveBufferSizePredictorFactory(MIN_SIZE, INITIAL_SIZE, MAX_SIZE));
    

    Check the DefaultNioDatagramChannelConfig class for more details.

  2. The pipeline does everything on the Netty worker thread. If the worker thread is overloaded, it delays the selector event loop execution and reading/writing the channel becomes a bottleneck. You have to add an execution handler to the pipeline as shown below; it frees the worker thread to do its own work (a combined sketch follows this list).

    ChannelPipeline lChannelPipeline = Channels.pipeline();
    
    lChannelPipeline.addFirst("execution-handler", new ExecutionHandler(
      new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576)));
    
    //add rest of the handlers here
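
For clarity, here is roughly how both points can be folded into the bootstrap from the question (the predictor bounds and the executor's pool/memory limits are illustrative values to tune for your load, not exact recommendations):

    lBootstrap = new ConnectionlessBootstrap( lChannelfactory );

    // 1. Bigger socket buffers plus an adaptive receive-buffer predictor
    //    (the min / initial / max bounds here are illustrative).
    lBootstrap.setOption( "sendBufferSize", 1048576 );
    lBootstrap.setOption( "receiveBufferSize", 1048576 );
    lBootstrap.setOption( "receiveBufferSizePredictorFactory",
        new AdaptiveReceiveBufferSizePredictorFactory( 1024, 2048, 65536 ) );

    // 2. One shared ExecutionHandler so decoding/handling runs in its own
    //    thread pool instead of on the I/O worker thread.
    final ExecutionHandler lExecutionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor( 16, 1048576, 1048576 ) );

    lBootstrap.setPipelineFactory( new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline()
        {
            ChannelPipeline lChannelPipeline = Channels.pipeline();
            lChannelPipeline.addFirst( "execution-handler", lExecutionHandler );
            lChannelPipeline.addLast( "Simple UDP Frame Dump DECODER", new SimpleUDPPacketDumpDecoder( null ) );
            lChannelPipeline.addLast( "Simple UDP Frame Dump HANDLER", new SimpleUDPPacketDumpChannelHandler( lOuterFrameStatsCollector ) );
            return lChannelPipeline;
        }
    } );

    bindChannel = lBootstrap.bind( socketAddress );

Note that the ExecutionHandler is created once, outside getPipeline(), so every channel shares the same OrderedMemoryAwareThreadPoolExecutor.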
    
Answered Sep 28 '22 by Jestan Nirojan