I want to account for a possible scenario where clients of my TCP/IP stream socket service send data faster than my service manages to move it into its own buffers (application buffers, naturally) with recv
and process it.
So basically, what happens in such scenarios?
Obviously, something beneath my service, which is a user application, has to receive the incoming stream and store it somewhere until I issue recv, right? Most certainly the operating system.
I don't want to re-open old questions, but I can't seem to find an answer to this seemingly obvious one.
TCP provides flow control. The TCP stack (on both the sender and the receiver side) buffers some data for you, and this is usually done in the OS kernel.
When the receiver's buffers fill up, the sender will know about it and stop sending more data, eventually causing the sending application to block (or otherwise be unable to send more data) until space becomes available again.
Briefly described: every TCP segment sent includes the amount of data the sender of that segment can still buffer - the window size. This means the other end always knows how much data it can send without the receiver throwing it away because its buffers are full. If the window size becomes 0, the buffers are full and no more data will be sent (and if the sender is using a blocking socket, a send() call will block). There are procedures for probing whether the TCP window is still 0, so sending can resume once the data has been consumed.
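You can see this in action with a deliberately slow receiver. Below is a minimal sketch in C (POSIX sockets; the port number, buffer size, and sleep interval are all illustrative, and error handling is trimmed): it accepts one connection and drains it slowly, so unread data piles up in the kernel's receive buffer and the window advertised to the sender eventually drops to zero.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);          /* illustrative port */

    if (bind(lfd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }
    listen(lfd, 1);

    int cfd = accept(lfd, NULL, NULL);

    /* How much the kernel will buffer for us before the window closes. */
    int rcvbuf = 0;
    socklen_t len = sizeof rcvbuf;
    getsockopt(cfd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
    printf("kernel receive buffer: %d bytes\n", rcvbuf);

    char buf[1024];
    while (recv(cfd, buf, sizeof buf, 0) > 0) {
        /* Consume slowly: unread data accumulates in the kernel buffer,
         * shrinking the window this side advertises to the sender. */
        sleep(1);
    }
    close(cfd);
    close(lfd);
    return 0;
}
```

Point any fast sender at it and you should see the sender's blocking send() stall once the window closes, then resume as the reader catches up.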
There are some more details here.
It's the network stack that maintains the data buffers (including the ones for incoming data). If a buffer fills up, subsequent TCP packets are dropped, and the client gets stuck trying to send the data. There's a bit more on this here and here.
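If you expect short bursts that arrive faster than you can recv(), one knob worth knowing about is SO_RCVBUF, which asks the kernel for a larger receive buffer so more of a burst can be absorbed before the window closes. A small sketch, assuming POSIX sockets (the helper name and requested size are mine):

```c
#include <stdio.h>
#include <sys/socket.h>

/* Hypothetical helper: request a larger kernel receive buffer for fd.
 * Call it before the buffer matters (ideally before connect()/listen());
 * the kernel may clamp the value, and Linux doubles the request to
 * account for bookkeeping overhead. */
int set_rcvbuf(int fd, int bytes)
{
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof bytes) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    /* Read back what the kernel actually granted. */
    socklen_t len = sizeof bytes;
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, &len) == 0)
        printf("effective receive buffer: %d bytes\n", bytes);
    return 0;
}
```

Note this only buys headroom; it doesn't replace flow control. If the client can outrun you indefinitely, no buffer size will save you, and the sender will still end up blocking (or seeing partial/failed sends in non-blocking mode).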