I would like to know how flow control works in a client-streaming gRPC service in Go.
Specifically, I am interested in knowing when a call to stream.SendMsg()
on the client side will block. According to the documentation:
SendMsg() blocks until:
- There is sufficient flow control to schedule m with the transport, or ...
So what is the specification for the flow control mechanism of the stream? For example, if the server-side code responsible for reading messages from the stream isn't reading them fast enough, at what point will calls to SendMsg() block?
Is there some kind of backpressure mechanism by which the server tells the client that it is not ready to receive more data? And in the meantime, where are the messages that were successfully sent before the backpressure signal queued?
gRPC flow control is based on HTTP/2 flow control: https://httpwg.org/specs/rfc7540.html#FlowControl
There will be backpressure. A message is only sent when there is enough flow-control window for it; otherwise SendMsg() blocks.
The signal from the receiving side does not add backpressure, it releases it. It is the receiver saying, in effect, "I am now ready to receive another 1 MB of messages, send them."
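To make the blocking behavior concrete, here is a minimal sketch. It assumes a hypothetical client-streaming RPC, roughly `service Uploader { rpc Upload(stream Chunk) returns (Summary); }`, compiled into a hypothetical package `pb`; the service, message, and package names are all illustrative, not from the question. The generated `Send`/`Recv` methods wrap `SendMsg`/`RecvMsg`, so the same blocking rules apply.

```go
package main

import (
	"context"
	"io"
	"log"
	"time"

	pb "example.com/uploader/pb" // hypothetical generated package
)

// Server side: a deliberately slow reader. Each Recv consumes bytes from the
// HTTP/2 flow-control window; the transport then sends WINDOW_UPDATE frames,
// which is the "release backpressure" signal that unblocks the client.
type slowUploader struct {
	pb.UnimplementedUploaderServer
}

func (s *slowUploader) Upload(stream pb.Uploader_UploadServer) error {
	for {
		chunk, err := stream.Recv()
		if err == io.EOF {
			// Client finished sending; reply and close the stream.
			return stream.SendAndClose(&pb.Summary{})
		}
		if err != nil {
			return err
		}
		log.Printf("received %d bytes", len(chunk.Data))
		time.Sleep(time.Second) // simulate slow processing
	}
}

// Client side: Send (which calls SendMsg under the hood) returns quickly while
// the stream's flow-control window has room; once the unread bytes in flight
// fill the window, Send blocks until the server's Recv calls free space and
// WINDOW_UPDATE frames come back.
func upload(ctx context.Context, client pb.UploaderClient) error {
	stream, err := client.Upload(ctx)
	if err != nil {
		return err
	}
	payload := make([]byte, 16*1024) // 16 KiB per message
	for i := 0; i < 100; i++ {
		// With a default initial window on the order of 64 KiB, only the
		// first few Sends return immediately; later ones block until the
		// slow server reads.
		if err := stream.Send(&pb.Chunk{Data: payload}); err != nil {
			return err
		}
		log.Printf("sent message %d", i)
	}
	_, err = stream.CloseAndRecv()
	return err
}
```

Note that grpc-go may also grow the windows dynamically based on its bandwidth-delay estimation, so the exact point at which Send starts blocking can vary; the invariant is simply that the client can never have more unacknowledged bytes outstanding than the receiver's advertised window.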