We are using .NET and sockets. The server uses the Socket.Send(byte[]) method, so it just sends the entire payload in one call. On the other side the clients consume the data with Socket.Receive(byte[]). In all the examples from Microsoft (and others) they seem to stick with a buffer size of 8192. We have used this size, but every now and then the data we send down to the clients exceeds it.
Is there a way of determining how much data the server's Send call sent us? What is the best buffer size?
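For reference, this is roughly what we are doing today, as a simplified sketch (BuildPayload and ProcessData are just placeholders for our real code):

    // Server side: send the whole payload in one call.
    byte[] payload = BuildPayload();              // sometimes larger than 8192 bytes
    serverSocket.Send(payload);

    // Client side: read into a fixed 8192-byte buffer.
    byte[] buffer = new byte[8192];
    int received = clientSocket.Receive(buffer);  // 'received' is what THIS call returned,
    ProcessData(buffer, received);                // not necessarily the whole payload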
Note that the hard limits are platform-specific. On some systems the maximum send buffer size is 1,048,576 bytes and the default value of the SO_SNDBUF option is 32,767 bytes; for a TCP socket, the maximum length you can specify is 1 GB.
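If you do want to change these buffer sizes in .NET, the Socket class exposes them directly. A small sketch (the 256 KB figure is only an illustrative value, not a recommendation):

    using System.Net.Sockets;

    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

    // SO_SNDBUF / SO_RCVBUF as exposed by .NET; 256 KB here is an example value only.
    socket.SendBufferSize = 256 * 1024;
    socket.ReceiveBufferSize = 256 * 1024;

    // Equivalent lower-level form:
    socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.SendBuffer, 256 * 1024);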
You can try ioctl. FIONREAD tells you how many bytes are immediately readable. If this is the same as the buffer size (which you might be able to retrieve and/or set with another ioctl call), then the buffer is full.
Once the receive buffer is full, the socket advertises a zero window and new data cannot be accepted from the network for this socket; any segments that still arrive must be dropped, which the transmitting node treats as a congestion event.
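In .NET, the closest analogue to FIONREAD is the Socket.Available property. A rough sketch of the same check (how meaningful "full" is will depend on your OS and socket settings):

    // Bytes already received by the OS and waiting to be read (the FIONREAD analogue).
    int readable = clientSocket.Available;

    // If this is at (or near) the configured receive buffer size, the kernel buffer is
    // effectively full and TCP flow control will throttle the sender.
    if (readable >= clientSocket.ReceiveBufferSize)
    {
        Console.WriteLine("Receive buffer appears to be full.");
    }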
Even if you're sending more data than that, it may well not be available in one call to Receive.
You can't determine how much data the server has sent - it's a stream of data, and you're just reading chunks at a time. You may read part of what the server sent in one Send call, or you may read the data from two Send calls in one Receive call.
8K is a reasonable buffer size - not so big that you'll waste a lot of memory, and not so small that you'll have to use loads of wasted Receive calls. 4K or 16K would quite possibly be fine too... I personally wouldn't start going above 16K for network buffers - I suspect you'd rarely fill them.
You could experiment by trying to use a very large buffer and log how many bytes were received in each call - that would give you some idea of how much is generally available - but it wouldn't really show the effect of using a smaller buffer. What concerns do you have over using an 8K buffer? If it's performance, do you have any evidence that this aspect of your code is a performance bottleneck?
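To expand on the "stream of data" point: one common approach (not the only one) is to have the server prefix each payload with a 4-byte length and have the client loop on Receive until the whole message has arrived. A sketch, assuming that framing scheme; ReceiveExactly is just an illustrative helper and clientSocket stands for your connected socket:

    // Reads exactly 'count' bytes, looping over Receive as many times as needed.
    static byte[] ReceiveExactly(Socket socket, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0)
                throw new SocketException((int)SocketError.ConnectionReset); // peer closed the socket
            offset += read;
        }
        return buffer;
    }

    // Usage, assuming the server writes a 4-byte length prefix before each payload:
    byte[] lengthPrefix = ReceiveExactly(clientSocket, 4);
    int payloadLength = BitConverter.ToInt32(lengthPrefix, 0);
    byte[] payload = ReceiveExactly(clientSocket, payloadLength);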
Jon Skeet's answer unfortunately leaves a big part of the picture out - the send buffer size, and the bandwidth-delay product of the pipe you're writing to.
If you are trying to send data over a large pipe using a single socket, and you want TCP to fill that pipe, you need to use a send buffer size that is at least as large as the bandwidth-delay product (BDP) of the pipe. Otherwise, TCP will not fill the pipe, because it will not keep enough 'bytes in flight' at all times.
TCP handles packet loss for you, which means that it has to have buffers to hold onto the data you give it until it can confirm that data has been received correctly by the other side (by a TCP ACK). No buffer is infinite, therefore there has to be a limit somewhere. That limit is arbitrary, you get to choose whatever you want, but you need to make sure it is large enough to handle the connection's BDP.
Consider a TCP socket with a send buffer size of exactly 1 byte, and suppose you're trying to send data over a connection that has a bitrate of 1 gbit/sec and a one-way latency of 1 ms.
How fast does this connection get data across? With a 1-byte buffer, the sender has to wait for the ACK of each byte before it can send the next one, so each byte takes a full round trip of 2 milliseconds. That works out to 500 bytes/sec == 4 kbit/sec.
Yikes.
Consider a connection that has a speed of 1 gigabit, and has a one-way latency of 10 milliseconds, on average. The round-trip-time (aka, the amount of time that elapses between your socket sending a packet and the time it receives the ack for that packet and thus knows to send more data) is usually twice the latency.
So if you have a 1 gigabit connection, and a RTT of 20 milliseconds, then that pipe has 1 gigabit/sec * 20 milliseconds == 2.5 megabytes of data in flight at all times if it's being utilized completely.
If your TCP send buffer is anything less than 2.5 megabytes, then that one socket will never fully utilize the pipe - you'll never get a gigabit/sec of performance out of your socket.
If your application uses many sockets, then the aggregate size of all TCP send buffers must add up to 2.5 MB in order to fully utilize this hypothetical 1 gigabit/20 ms RTT pipe. For instance, if you use 8192-byte buffers, you need 306 simultaneous TCP sockets to fill that pipe.
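To make the arithmetic concrete for this hypothetical 1 gigabit / 20 ms pipe (plain back-of-the-envelope code, no real sockets involved):

    // Back-of-the-envelope numbers for the hypothetical pipe above.
    double bandwidthBitsPerSec = 1_000_000_000;                // 1 gigabit/sec
    double rttSeconds = 0.020;                                 // 20 ms round trip

    double bdpBytes = bandwidthBitsPerSec * rttSeconds / 8;    // 2,500,000 bytes = 2.5 MB
    int socketsNeeded = (int)Math.Ceiling(bdpBytes / 8192);    // ~306 sockets at 8 KB each

    Console.WriteLine($"BDP: {bdpBytes} bytes, 8 KB sockets needed: {socketsNeeded}");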
Edit for questions:
Calculating BDP is just multiplying the Bandwidth times the Round-trip Delay and paying attention to units.
So if you have a 1 gigabit/sec connection and a round-trip time of 20 msecs, you're multiplying bits/sec * seconds, so the seconds cancel out and you're left with bits: 1,000,000,000 bits/sec * 0.020 sec = 20,000,000 bits. Convert to bytes (divide by 8) and you have your buffer size: 2,500,000 bytes.
And thus, our TCP buffer needs to be set to 2.5 MB to saturate this made-up pipe.
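In .NET terms, that would mean sizing the send buffer to at least the BDP. A hedged sketch using the made-up numbers from the example above:

    using System.Net.Sockets;

    // Bandwidth-delay product: bits/sec * seconds, converted to bytes.
    const double BandwidthBitsPerSec = 1_000_000_000;   // assumed 1 Gbit/sec
    const double RttSeconds = 0.020;                     // assumed 20 ms RTT

    int bdpBytes = (int)(BandwidthBitsPerSec * RttSeconds / 8);   // 2,500,000 bytes

    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    socket.SendBufferSize = bdpBytes;   // let TCP keep a full BDP 'in flight' on this one socket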