TcpListener is queuing connections faster than I can clear them

As I understand it, TcpListener will queue connections once you call Start(). Each time you call AcceptTcpClient (or BeginAcceptTcpClient), it will dequeue one item from the queue.

If we load test our TcpListener app by sending 1,000 connections to it at once, the queue builds far faster than we can clear it, and clients eventually time out because their connections are still sitting in the queue when they give up waiting for a response. However, the server doesn't appear to be under much pressure: our app isn't consuming much CPU time, and the other monitored resources on the machine aren't breaking a sweat. It feels like we're not running efficiently enough right now.

We're calling BeginAcceptTcpClient and then immediately handing over to a ThreadPool thread to actually do the work, then calling BeginAcceptTcpClient again. The work involved doesn't seem to put any pressure on the machine; it's basically just a 3 second sleep followed by a dictionary lookup and then a 100 byte write to the TcpClient's stream (a simplified sketch of the handler follows the listener code below).

Here's the TcpListener code we're using:

    // Thread signal.
    private static ManualResetEvent tcpClientConnected = new ManualResetEvent(false);

    public void DoBeginAcceptTcpClient(TcpListener listener)
    {
        // Set the event to nonsignaled state.
        tcpClientConnected.Reset();

        listener.BeginAcceptTcpClient(
            new AsyncCallback(DoAcceptTcpClientCallback),
            listener);

        // Wait for signal
        tcpClientConnected.WaitOne();
    }

    public void DoAcceptTcpClientCallback(IAsyncResult ar)
    {
        // Get the listener that handles the client request, and the TcpClient
        TcpListener listener = (TcpListener)ar.AsyncState;
        TcpClient client = listener.EndAcceptTcpClient(ar);

        if (inProduction)
            ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client, serverCertificate));  // With SSL
        else
            ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client));  // Without SSL

        // Signal the calling thread to continue.
        tcpClientConnected.Set();
    }

    public void Start()
    {
        currentHandledRequests = 0;
        tcpListener = new TcpListener(IPAddress.Any, 10000);
        try
        {
            tcpListener.Start();

            while (true)
                DoBeginAcceptTcpClient(tcpListener);
        }
        catch (SocketException)
        {
            // The TcpListener is shutting down, exit gracefully
            CheckBuffer();
            return;
        }
    }
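
For completeness, HandleTcpRequest itself does very little. Below is a simplified, non-SSL sketch; the production overload also takes serverCertificate for the SSL path, and 'responses' and its key are just placeholder names for our dictionary lookup (assume the same class and using directives as the listener code above):

    private void HandleTcpRequest(TcpClient client)
    {
        using (client)
        using (NetworkStream stream = client.GetStream())
        {
            // Simulated work: roughly a 3 second pause...
            Thread.Sleep(3000);

            // ...then a dictionary lookup and a ~100 byte write to the stream.
            // 'responses' and "some-key" are placeholder names, not our real ones.
            byte[] response = responses["some-key"];
            stream.Write(response, 0, response.Length);
        }
    }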

I'm assuming the answer will be related to using Sockets instead of TcpListener, or at least using TcpListener.AcceptSocket, but how would we go about doing that?

One idea we had was to call AcceptTcpClient and immediately Enqueue the TcpClient into one of multiple Queue<TcpClient> objects. That way, we could poll those queues on separate threads (one queue per thread), without running into monitors that might block the thread while waiting for other Dequeue operations. Each queue thread could then use ThreadPool.QueueUserWorkItem to have the work done in a ThreadPool thread and then move onto dequeuing the next TcpClient in its queue. Would you recommend this approach, or is our problem that we're using TcpListener and no amount of rapid dequeueing is going to fix that?
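
To make the idea concrete, this is very roughly (and untested) what we mean. EnqueueClient would be called from the accept callback, a dedicated thread per queue would run DrainQueue, and all the names are placeholders rather than code we actually have (assume the same class plus System.Collections.Generic, System.Net.Sockets and System.Threading):

    private readonly Queue<TcpClient>[] queues =
    {
        new Queue<TcpClient>(), new Queue<TcpClient>(),
        new Queue<TcpClient>(), new Queue<TcpClient>()
    };
    private int nextQueue;

    // Called from the accept path: round-robin each accepted client onto a queue.
    // Only the accept thread calls this, so nextQueue needs no synchronization.
    private void EnqueueClient(TcpClient client)
    {
        nextQueue = (nextQueue + 1) % queues.Length;
        Queue<TcpClient> queue = queues[nextQueue];
        lock (queue)
            queue.Enqueue(client);
    }

    // One dedicated thread per queue runs this loop, so a Dequeue on one queue
    // never has to wait behind a Dequeue on another.
    private void DrainQueue(Queue<TcpClient> queue)
    {
        while (true)
        {
            TcpClient client = null;
            lock (queue)
            {
                if (queue.Count > 0)
                    client = queue.Dequeue();
            }

            if (client != null)
                ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client));
            else
                Thread.Sleep(1); // queue empty right now; poll again shortly
        }
    }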

asked Apr 30 '10 by Matt Brindley

4 Answers

I've whipped up some code that uses sockets directly, but I don't have the means to load test it with 1,000 clients. Could you please test how this code compares to your current solution? I'd be very interested in the results, as I'm building a server that needs to accept a lot of connections myself right now.

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    class Program
    {
        static readonly WaitCallback handleTcpRequest = new WaitCallback(HandleTcpRequest);

        static void Main()
        {
            var e = new SocketAsyncEventArgs();
            e.Completed += new EventHandler<SocketAsyncEventArgs>(e_Completed);

            var socket = new Socket(
                AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            socket.Bind(new IPEndPoint(IPAddress.Loopback, 8181));

            // (int)SocketOptionName.MaxConnections asks the stack for the
            // largest listen backlog it will allow.
            socket.Listen((int)SocketOptionName.MaxConnections);

            // AcceptAsync returns false when the accept completed synchronously;
            // Completed is not raised in that case, so call the handler directly.
            if (!socket.AcceptAsync(e))
                e_Completed(socket, e);

            Console.WriteLine("--ready--");
            Console.ReadLine();
            socket.Close();
        }

        static void e_Completed(object sender, SocketAsyncEventArgs e)
        {
            var socket = (Socket)sender;

            // Hand the accepted socket to the thread pool, then reuse the same
            // SocketAsyncEventArgs to post the next accept immediately.
            ThreadPool.QueueUserWorkItem(handleTcpRequest, e.AcceptSocket);
            e.AcceptSocket = null;

            if (!socket.AcceptAsync(e))
                e_Completed(socket, e);
        }

        static void HandleTcpRequest(object state)
        {
            var socket = (Socket)state;
            Thread.Sleep(100); // simulate doing some work
            socket.Close();
        }
    }
answered by dtb

Unless I'm missing something, you're calling BeginAcceptTcpClient, which is asynchronous, but then you immediately call WaitOne() to wait until the asynchronous accept finishes, which effectively makes the process synchronous. Your code can only accept one client at a time. Or am I totally crazy? At the very least, this seems like a lot of context switching for nothing.
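
For example, a rough reworking of the code from the question (untested, non-SSL path only, minimal error handling) that drops the ManualResetEvent and re-arms the accept from inside the callback, so accepting is never gated on the previous client, might look like this:

    public void Start()
    {
        tcpListener = new TcpListener(IPAddress.Any, 10000);
        tcpListener.Start();

        // Post the first accept; each completed accept posts the next one,
        // so there is no WaitOne() blocking between accepts.
        tcpListener.BeginAcceptTcpClient(DoAcceptTcpClientCallback, tcpListener);
    }

    public void DoAcceptTcpClientCallback(IAsyncResult ar)
    {
        TcpListener listener = (TcpListener)ar.AsyncState;

        TcpClient client;
        try
        {
            client = listener.EndAcceptTcpClient(ar);
        }
        catch (ObjectDisposedException)
        {
            return; // the listener was stopped; exit quietly
        }

        // Re-arm the accept immediately so further connections keep draining
        // while this one is handled on the thread pool.
        listener.BeginAcceptTcpClient(DoAcceptTcpClientCallback, listener);

        ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client));
    }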

answered by Jonathan Beerhalter

It was alluded to in the other answers, but I would suggest that in your tcpListener.Start() method you use the overload that allows you to set the backlog to a number higher than the maximum number of connections you're expecting at one time:


    public void Start()
    {
        currentHandledRequests = 0;
        tcpListener = new TcpListener(IPAddress.Any, 10000);
        try
        {
            tcpListener.Start(1100);  // This is the backlog parameter

            while (true)
                DoBeginAcceptTcpClient(tcpListener);
        }
        catch (SocketException)
        {
            // The TcpListener is shutting down, exit gracefully
            CheckBuffer();
            return;
        }
    }

Basically, this option sets how many "pending" TCP connections are allowed to sit waiting for an Accept to be called. If you aren't accepting connections fast enough and this backlog fills up, new TCP connections will be rejected automatically, and you won't even get a chance to process them.

As others have mentioned, the other possibility is to speed up how quickly you process the incoming connections. Even so, you should still set the backlog to a higher value, even if you can speed up the accept time.

answered by Steve Wranovsky

The first thing to ask yourself is "is 1000 connections all at once reasonable". Personally I think it's unlikely that you will get into that situation. More likely you have 1000 connections occurring over a short period of time.

I have a TCP test program that I use to test my server framework. It can do things like open X connections in total, in batches of Y, with a gap of Z ms between each batch, which I personally find more realistic than 'a vast number all at once'. It's free and might help; you can get it from here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html

As others have said, increase the listen backlog, process the connections faster, use asynchronous accepts if possible...

answered by Len Holgate