The implicit question is: If Linux blocks the send()
call when the socket's send buffer is full, why should there be any lost packets?
More details: I wrote a little utility in C to send UDP packets as fast as possible to a unicast address and port. Each packet carries a 1450-byte UDP payload whose first bytes are a counter that increments by 1 for every packet. I run it on Fedora 20 inside VirtualBox on a desktop PC with a 1 Gb NIC (i.e. quite slow).
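A minimal sketch of what such a sender could look like (the destination address, port, and 4-byte counter layout are assumptions for illustration, not the original utility):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                         /* hypothetical port */
    inet_pton(AF_INET, "192.168.1.20", &dst.sin_addr);  /* hypothetical receiver */

    /* connect() so that plain send() can be used, as in the question */
    if (connect(fd, (struct sockaddr *)&dst, sizeof dst) < 0) {
        perror("connect");
        return 1;
    }

    unsigned char payload[1450] = {0};

    for (uint32_t counter = 0; ; counter++) {
        /* first 4 bytes carry the sequence counter, network byte order */
        uint32_t seq = htonl(counter);
        memcpy(payload, &seq, sizeof seq);

        if (send(fd, payload, sizeof payload, 0) < 0)
            perror("send");
    }
}
```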
Then I wrote a little utility to read UDP packets from a given port; it checks each packet's counter against its own counter and prints a message if they differ (i.e. one or more packets have been lost). I run it on a Fedora 20 bi-Xeon server with a 1 Gb Ethernet NIC (i.e. super fast). It does show many lost packets.
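A corresponding receiver sketch, again with a hypothetical port and the same assumed counter layout:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9999);              /* hypothetical port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }

    unsigned char buf[2048];
    uint32_t expected = 0;

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n < (ssize_t)sizeof(uint32_t))
            continue;

        uint32_t seq;
        memcpy(&seq, buf, sizeof seq);
        seq = ntohl(seq);

        /* a gap in the counter means one or more packets were lost
         * (reordering would also trigger this) */
        if (seq != expected)
            printf("gap: expected %u, got %u\n", expected, seq);

        expected = seq + 1;
    }
}
```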
Both machines are on a local network. I don't know the exact number of hops between them, but I don't think there are more than 2 routers in between.
Things I tried:
- Adding a delay between each send() call. With a delay of 1 ms no packets are lost any more, but a delay of 100 µs still loses packets.
- Increasing the send buffer size with setsockopt(). That does not make any difference... (rough sketch of both attempts below)
Please enlighten me!
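For reference, this is roughly what both attempts look like in code (the helper name and buffer size are illustrative assumptions, not the original utility):

```c
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* 'fd' and 'payload' are assumed to come from the sender sketch above */
void send_with_tweaks(int fd, const unsigned char *payload, size_t len)
{
    /* attempt 2: ask for a bigger send buffer (made no difference) */
    int sndbuf = 4 * 1024 * 1024;   /* arbitrary 4 MiB */
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf) < 0)
        perror("setsockopt(SO_SNDBUF)");

    for (;;) {
        if (send(fd, payload, len, 0) < 0)
            perror("send");

        /* attempt 1: throttle the sender; 1000 us stops the loss,
         * 100 us does not */
        usleep(1000);
    }
}
```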
For UDP, the SO_SNDBUF socket option only limits the size of the datagram you can send. There is no explicit throttling of the send socket buffer as with TCP. There is, of course, in-kernel queuing of frames to the network card.
In other words, send(2) might drop your datagram without returning an error (see the description of ENOBUFS at the bottom of the manual page).
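If you want to at least see the cases that are reported, the send loop could check errno explicitly. This is only a sketch; as noted above, many drops will never show up here at all:

```c
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* send one datagram and report the ENOBUFS case the manual page mentions */
ssize_t send_one(int fd, const void *buf, size_t len)
{
    ssize_t n = send(fd, buf, len, 0);
    if (n < 0 && errno == ENOBUFS) {
        /* the kernel's output queue to the NIC is full; datagram not sent */
        fprintf(stderr, "send: ENOBUFS (output queue full)\n");
    }
    return n;
}
```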
Then the packet might be dropped pretty much anywhere on the path: in the sending host's queues, at any intermediate switch or router, or at the receiving host if its socket buffer fills up.
From what you said, though, it sounds very probable that the VM is simply not able to send the packets at a high enough rate. Sniff the wire with tcpdump(1) or wireshark(1) as close to the source as possible and check the sequence numbers; that will tell you whether it is the sender that is to blame.