Consider 100 bytes sent across a socket. With a TCP socket, if I call recv() with a length of 50, I get the first 50 bytes, and if I call it again, I get the second 50 bytes. With a UDP socket, if I call recvfrom() with a length of 50, I get the first 50 bytes but have no way of retrieving the second 50: the rest of that datagram is discarded, and subsequent calls to recvfrom() block until the next datagram is received.
Does this mean that, if I want to receive an entire UDP datagram regardless of its size, I have to allocate a 64k buffer (the maximum allowed by UDP)? If I connect() my UDP socket, does this change the behavior? Or does a protocol operating over UDP generally entail a known maximum packet size that should be used for the buffer?
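For concreteness, here is a minimal self-contained sketch of the behavior I mean (the loopback address and port 9999 are arbitrary):

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9999);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* Send one 100-byte datagram to ourselves. */
    char out[100];
    memset(out, 'x', sizeof(out));
    sendto(fd, out, sizeof(out), 0, (struct sockaddr *)&addr, sizeof(addr));

    /* Read it with a 50-byte buffer: recvfrom() returns 50 and the
     * remaining 50 bytes of the datagram are gone; a second call
     * would block waiting for the next datagram. */
    char in[50];
    ssize_t n = recvfrom(fd, in, sizeof(in), 0, NULL, NULL);
    printf("got %zd bytes\n", n);  /* prints: got 50 bytes */

    close(fd);
    return 0;
}
```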
Most sane UDP-based protocols stay below the MTU minus the IP and UDP headers, to avoid IP fragmentation. E.g. classic DNS switches to TCP for messages bigger than 512 bytes. So you are probably safe with a buffer of 1472 bytes (1500-byte Ethernet MTU - 20-byte IP header without options - 8-byte UDP header), unless your network uses jumbo frames. This of course depends on the application protocol on top of UDP; see the sketch below.
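A minimal sketch of that fixed-buffer approach, assuming sockfd is an already-bound UDP socket:

```c
#include <sys/types.h>
#include <sys/socket.h>

#define UDP_MAX_PAYLOAD 1472  /* 1500 Ethernet MTU - 20 IP header - 8 UDP header */

/* Read one datagram into a fixed buffer. One recvfrom() consumes exactly
 * one datagram; any bytes beyond sizeof(buf) would be silently discarded,
 * which is why the protocol must guarantee it never sends more. */
ssize_t read_datagram(int sockfd)
{
    char buf[UDP_MAX_PAYLOAD];
    struct sockaddr_storage src;
    socklen_t srclen = sizeof(src);

    ssize_t n = recvfrom(sockfd, buf, sizeof(buf), 0,
                         (struct sockaddr *)&src, &srclen);
    if (n >= 0) {
        /* process n bytes in buf ... */
    }
    return n;
}
```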
If you are really paranoid (or working with an unknown protocol), you can use the MSG_PEEK and MSG_TRUNC flags to first figure out the size, and then allocate a big enough buffer (see recv(2)).
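A minimal sketch of that approach, relying on the Linux behavior where MSG_TRUNC makes recv() return the real datagram length even when the buffer is smaller (per recv(2)); recv_whole_datagram is just an illustrative helper name:

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Receive one datagram of arbitrary size from a UDP socket.
 * Returns a malloc'd buffer the caller must free, or NULL on error;
 * the datagram length is stored in *out_len. */
char *recv_whole_datagram(int sockfd, ssize_t *out_len)
{
    char probe;  /* 1-byte buffer: we only care about the return value */

    /* MSG_PEEK leaves the datagram queued; MSG_TRUNC makes recv()
     * return the datagram's real length (Linux-specific behavior). */
    ssize_t len = recv(sockfd, &probe, 1, MSG_PEEK | MSG_TRUNC);
    if (len < 0)
        return NULL;

    char *buf = malloc(len > 0 ? len : 1);
    if (buf == NULL)
        return NULL;

    /* Now actually consume the datagram with a buffer that fits it. */
    ssize_t got = recv(sockfd, buf, len, 0);
    if (got < 0) {
        free(buf);
        return NULL;
    }
    *out_len = got;
    return buf;
}
```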