
Java dropping half of UDP packets

I have a simple client/server setup. The server is written in C, and the client that queries it is written in Java.

My problem is that when I send bandwidth-intensive data over the connection, such as video frames, up to half the packets are dropped. I make sure that I properly fragment the UDP packets on the server side (a UDP datagram's payload can be at most just under 2^16 bytes). I verified that the server is sending the packets (by printing the result of sendto()), but Java doesn't seem to be receiving half the data.

Furthermore, when I switch to TCP, all the video frames get through, but latency starts to build up, adding several seconds of delay after only a few seconds of runtime.

Is there anything obvious that I'm missing? I just can't seem to figure this out.

Asked Mar 14 '10 by Andrew Klofas

2 Answers

Get a network tool like Wireshark so you can see what is happening on the wire.

UDP makes no retransmission attempts, so if a packet is dropped somewhere, it is up to the program to deal with the loss. TCP will work hard to deliver all packets to the program in order, discarding dups and requesting lost packets on its own. If you are seeing high latency, I'd bet you'll see a lot of packet loss with TCP as well, which will show up as retransmissions from the server. If you don't see TCP retransmissions, perhaps the client isn't handling the data fast enough to keep up.
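To complement what Wireshark shows on the wire, it can also help to measure what the Java client actually receives at the application level. Here is a minimal sketch (the port number and buffer size are assumptions for illustration, not values from the question) that simply counts datagrams and bytes per second; if this number is far below what the server reports sending, the loss is real and not a logging illusion:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpReceiveRate {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(5000)) { // assumed port
            byte[] buf = new byte[65535];                        // large enough for any UDP payload
            long packets = 0, bytes = 0;
            long windowStart = System.currentTimeMillis();

            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);                          // blocks until a datagram arrives
                packets++;
                bytes += packet.getLength();

                long now = System.currentTimeMillis();
                if (now - windowStart >= 1000) {                 // report once per second
                    System.out.printf("%d datagrams, %d bytes in the last second%n", packets, bytes);
                    packets = 0;
                    bytes = 0;
                    windowStart = now;
                }
            }
        }
    }
}
```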

Answered Sep 30 '22 by Fred


Any UDP-based application protocol will inevitably be susceptible to packet loss, reordering and (in some circumstances) duplicates. The "U" in UDP could stand for "Unreliable", as in Unreliable Datagram Protocol. (OK, it really stands for "User" ... but it is certainly a good way to remember UDP's characteristics.)

UDP packet losses typically occur because your traffic is exceeding the buffering capacity of one or more of the "hops" between the server and client. When this happens, packets are dropped ... and since you are using UDP, there is no transport protocol-level notification that this is occurring.

If you use UDP in an application, the application needs to take account of UDP's unreliable nature, implementing its own mechanisms for dealing with dropped and out-of-order packets and for doing its own flow control. (An application that blasts out UDP packets with no thought to the effect that this may have on an already overloaded network is a bad network citizen.)
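One common way to make the loss visible to the application is to prepend a sequence number to each datagram on the sender side and check for gaps on the receiver. This is only an illustrative sketch, not the OP's wire format: it assumes the C server writes a 4-byte big-endian sequence number at the start of every datagram.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.ByteBuffer;

// Receiver side. Assumes (for this sketch) each datagram starts with a
// 4-byte big-endian sequence number, followed by the frame payload.
public class SequenceChecker {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(5000)) {   // assumed port
            byte[] buf = new byte[65535];
            long expected = -1;                                    // next sequence number we expect
            long lost = 0;

            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);

                long seq = ByteBuffer.wrap(packet.getData(), 0, 4).getInt() & 0xFFFFFFFFL;
                if (expected >= 0 && seq != expected) {
                    if (seq > expected) {
                        lost += seq - expected;                    // a gap: datagrams went missing
                        System.out.println("lost so far: " + lost);
                    } else {
                        System.out.println("out-of-order or duplicate: " + seq);
                    }
                }
                expected = seq + 1;
                // bytes 4..packet.getLength() are the actual frame payload
            }
        }
    }
}
```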

(In the TCP case, packets are probably being dropped as well, but TCP is detecting and resending the dropped packets, and the TCP flow control mechanism is kicking in to slow down the rate of data transmission. The net result is "latency".)

EDIT - based on the OP's comment, the cause of his problem was that the client was not "listening" for a period, causing the packets to (presumably) be dropped by the client's OS. The way to address this is to:

  1. use a dedicated Java thread that just reads the packets and queues them for processing, and

  2. increase the size of the kernel packet queue (the socket receive buffer) for the socket. (A sketch of both measures follows this list.)
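Here is a minimal sketch of those two measures in the Java client. The port, queue capacity and buffer size are assumptions for illustration, not values from the question:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class UdpClient {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(5000);       // assumed port
        socket.setReceiveBufferSize(4 * 1024 * 1024);           // ask the kernel for a bigger queue (a hint, not a guarantee)

        BlockingQueue<byte[]> frames = new ArrayBlockingQueue<>(1024);

        // Dedicated reader thread: does nothing but pull datagrams off the socket
        // and hand them over, so the kernel queue is drained as fast as possible.
        Thread reader = new Thread(() -> {
            byte[] buf = new byte[65535];
            try {
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    byte[] copy = new byte[packet.getLength()]; // copy, because buf is reused
                    System.arraycopy(buf, 0, copy, 0, copy.length);
                    if (!frames.offer(copy)) {
                        // queue full: we drop here explicitly instead of letting the kernel drop silently
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        reader.setDaemon(true);
        reader.start();

        // Processing (here, just the main thread) decodes/renders at its own pace.
        while (true) {
            byte[] chunk = frames.take();
            System.out.println("got " + chunk.length + " bytes"); // replace with real frame handling
        }
    }
}
```

Note that setReceiveBufferSize() is only a request; the OS caps it (on Linux, for example, at the net.core.rmem_max sysctl), so you may need to raise that limit as well, and getReceiveBufferSize() will tell you what you actually got.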

But even when you take these measures you can still get packets dropped. For example, if the machine is overloaded, the application may not get execution time-slices frequently enough to read and queue all packets before the kernel has to drop them.

EDIT 2 - There is some debate about whether UDP is susceptible to duplicates. It is certainly true that UDP has no innate duplicate detection or prevention. But it is also true that the IP packet routing fabric that is the internet is unlikely to spontaneously duplicate packets. So duplicates, if they do occur, are likely to occur because the sender has decided to resend a UDP packet. Thus, to my mind, while UDP is susceptible to problems with duplicates, it does not cause them per se ... unless there is a bug in the OS protocol stack or in the IP fabric.

Answered Sep 30 '22 by Stephen C