 

Handling dropped TCP packets in C#

Tags:

c#

tcp

sockets

I'm sending a large amount of data in one go between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote computer on the internet it seems to drop data.

I send 20000 strings using the socket.Send() method and receive them using a loop which does socket.Receive(). Each string is delimited by unique characters which I use to count the number received (this is the protocol, if you like). The protocol is proven, in that even with fragmented messages each string is correctly counted. On my local machine I get all 20000; over the internet I get anything between 17000 and 20000. It seems to get worse the slower the remote computer's connection is. To add to the confusion, turning on Wireshark seems to reduce the number of dropped messages.
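For context, the counting side of the protocol just counts delimiter characters, something like the sketch below (simplified for this question; '|' stands in for my actual delimiter):

private static int stringsReceived = 0;

// Counts complete strings by counting delimiters. Because it only
// counts characters, it works even when a string arrives split
// across several Receive() calls.
private static void CountStringsIn(string input)
{
    foreach (char c in input)
    {
        if (c == '|') // placeholder for the real delimiter
        {
            stringsReceived++;
        }
    }
}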

First of all, what is causing this? Is it a TCP/IP issue or something wrong with my code?

Secondly, how can I get round this? Receiving all of the 20000 strings is vital.

Socket receiving code:

private static readonly Encoding encoding = new ASCIIEncoding();
//...
while (socket.Connected)
{
    byte[] recvBuffer = new byte[1024];
    int bytesRead = 0;

    try
    {
        // Blocks until some data arrives; may return fewer bytes
        // than the other side passed to a single Send() call.
        bytesRead = socket.Receive(recvBuffer);
    }
    catch (SocketException e)
    {
        if (!socket.Connected)
        {
            return;
        }
    }

    // Decode only the bytes actually read and count the delimiters.
    string input = encoding.GetString(recvBuffer, 0, bytesRead);
    CountStringsIn(input);
}

Socket sending code:

private static readonly Encoding encoding = new ASCIIEncoding();
//...
socket.Send(encoding.GetBytes(message)); // message: the string to send
asked Aug 26 '09 by Nosrama

1 Answer

If you're dropping packets, you'll see a delay in transmission, since TCP has to retransmit the dropped packets. This delay can be very significant, although there's a TCP option called selective acknowledgement which, if supported by both sides, triggers a resend of only the packets that were dropped rather than every packet since the dropped one. There's no way to control that in your code. With TCP you can always assume that every packet is delivered in order; if for some reason the stack can't deliver every packet in order, the connection will drop, either by a timeout or by one end of the connection sending a RST packet.

What you're seeing is most likely the result of Nagle's algorithm. Instead of sending each bit of data as you post it, it sends one packet and then waits for an ACK from the other side. While it's waiting, it aggregates all the other data that you want to send, combines it into one big packet, and then sends that. Since the maximum size of a TCP packet is 65k, it can combine quite a bit of data into one packet, although it's extremely unlikely that this will occur, particularly since winsock's default buffer size is about 10k or so (I forget the exact amount). Additionally, if the receiver's advertised window is less than 65k, the sender will only send as much as the last advertised window size. The window size also affects Nagle's algorithm in terms of how much data it can aggregate prior to sending, because it can't send more than the window size.

The reason you see this is that on the internet, unlike on your local network, that first ACK takes more time to return, so Nagle's algorithm aggregates more of your data into a single packet. Locally, the return is effectively instantaneous, so it's able to send your data as quickly as you can post it to the socket. You can disable Nagle's algorithm on the client side by using setsockopt (winsock) or Socket.SetSocketOption (.NET), but I highly recommend that you DO NOT disable Nagling on the socket unless you are 100% sure you know what you're doing. It's there for a very good reason.
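For reference, this is what disabling it looks like in .NET (shown purely for illustration, given the warning above):

// Disable Nagle's algorithm (sets TCP_NODELAY) - not recommended here.
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);

// Equivalent shortcut exposed by the Socket class:
socket.NoDelay = true;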

answered Sep 20 '22 by Jeff Tucker