What are the use cases for buffered channels? If I want multiple parallel actions I could just use the default, synchronous channel, e.g.
package main

import (
    "fmt"
    "time"
)

func longLastingProcess(c chan string) {
    time.Sleep(2000 * time.Millisecond)
    c <- "tadaa"
}

func main() {
    c := make(chan string)
    go longLastingProcess(c)
    go longLastingProcess(c)
    go longLastingProcess(c)
    fmt.Println(<-c)
}
What would be the practical cases for increasing the buffer size?
Buffered channels accept a limited number of values without a corresponding receiver for those values; you create one by giving make a buffer capacity. Sends to a buffered channel block only when the buffer is full. Similarly, receives from a buffered channel block only when the buffer is empty.
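A minimal sketch of that behaviour: a channel created with a capacity of 2 accepts two sends before any receiver is ready, and a third send would block until a receive frees a slot.

package main

import "fmt"

func main() {
    // A channel with a buffer of 2: the first two sends complete immediately.
    c := make(chan string, 2)
    c <- "first"
    c <- "second"
    // A third send here would block until a receive frees a slot in the buffer.
    fmt.Println(<-c)
    fmt.Println(<-c)
}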
When a channel is created with no capacity, it is called an unbuffered channel. In turn, a channel created with capacity is called a buffered channel. To understand what the synchronization behavior will be for any goroutine interacting with a channel, we need to know the type and state of the channel.
Sends to a buffered channel block only when the buffer is full. Receives block when the buffer is empty.
Go provides buffered channels, which let you specify a fixed buffer capacity so that up to that many values can be sent without a corresponding receive. Sends on such a channel block only when the buffer is full. Likewise, the receiving end blocks only when the buffer is empty.
Generally, buffering in channels is beneficial for performance, because it reduces how often the sender and receiver have to wait for each other.
If a program is designed using an event-flow or data-flow approach, channels provide the means for the events to pass between one process and another (I use the term process in the same sense as in Tony Hoare's Communicating Sequential Processes (CSP), i.e. effectively synonymous with goroutine).
There are times when a program needs its components to remain in lock-step synchrony. In this case, unbuffered channels are required.
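As a minimal sketch of that lock-step behaviour: with an unbuffered channel every send waits for the matching receive, so the two goroutines below rendezvous on each value.

package main

import "fmt"

func main() {
    c := make(chan int) // unbuffered: each send waits for a matching receive
    done := make(chan struct{})

    go func() {
        for v := range c {
            fmt.Println("received", v)
        }
        close(done)
    }()

    for i := 0; i < 3; i++ {
        c <- i // does not proceed until the goroutine above has taken i
    }
    close(c)
    <-done
}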
Otherwise, it is typically beneficial to add buffering to the channels. This should be seen as an optimisation step (deadlock may still be possible if not designed out).
There are novel throttle structures made possible by using channels with small buffers (example).
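Since the linked example is not reproduced here, the following is just one common throttle pattern, sketched under my own naming: a small buffered channel used as a counting semaphore to cap how many goroutines run a section at once.

package main

import (
    "fmt"
    "sync"
)

func main() {
    // A buffered channel used as a counting semaphore: at most 2 workers run at once.
    sem := make(chan struct{}, 2)
    var wg sync.WaitGroup

    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            sem <- struct{}{} // acquire a slot; blocks while 2 slots are already held
            fmt.Println("working", id)
            <-sem // release the slot
        }(i)
    }
    wg.Wait()
}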
There are special overwriting or lossy forms of channels used in occam and jcsp for fixing the special case of a cycle (or loop) of processes that would otherwise probably deadlock. This is also possible in Go by writing an overwriting goroutine buffer (example).
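One way such an overwriting buffer might be sketched in Go (the function and channel names are illustrative, not taken from the linked example): a goroutine that sits between an input and an output channel and keeps only the most recent value when the consumer lags behind.

package main

import "fmt"

// overwritingBuffer forwards values from in to out, but when the consumer is
// slow it keeps only the most recent value; older pending values are dropped.
func overwritingBuffer(in <-chan int, out chan<- int) {
    var latest int
    have := false
    for {
        if have {
            select {
            case v, ok := <-in:
                if !ok {
                    out <- latest
                    close(out)
                    return
                }
                latest = v // overwrite the pending value
            case out <- latest:
                have = false
            }
        } else {
            v, ok := <-in
            if !ok {
                close(out)
                return
            }
            latest, have = v, true
        }
    }
}

func main() {
    in := make(chan int)
    out := make(chan int)
    go overwritingBuffer(in, out)

    for i := 0; i < 5; i++ {
        in <- i // producer runs ahead; stale values are overwritten
    }
    close(in)

    for v := range out {
        fmt.Println(v) // prints only 4 here, because the consumer starts after the producer finishes
    }
}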
You should never add buffering merely to fix a deadlock. If your program deadlocks, it's far easier to fix by starting with zero buffering and thinking through the dependencies. Then add buffering when you know it won't deadlock.
You can construct goroutines compositionally - that is, a goroutine may itself contain goroutines. This is a feature of CSP and benefits scalability greatly. The internal channels between a group of goroutines are not of interest when designing the external use of the group as a self-contained component. This principle can be applied repeatedly at increasingly larger scales.
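A small sketch of that composition (the component and channel names are illustrative): a function that internally wires two goroutines together with a private channel, while callers only ever see its input and output channels.

package main

import "fmt"

// doubleAndLabel is a self-contained component. Internally it runs two
// goroutines joined by a private channel; callers see only in and out.
func doubleAndLabel(in <-chan int) <-chan string {
    mid := make(chan int) // internal channel, invisible to callers
    out := make(chan string)

    go func() { // stage 1: double each value
        defer close(mid)
        for v := range in {
            mid <- v * 2
        }
    }()

    go func() { // stage 2: format the result
        defer close(out)
        for v := range mid {
            out <- fmt.Sprintf("result: %d", v)
        }
    }()

    return out
}

func main() {
    in := make(chan int)
    out := doubleAndLabel(in)

    go func() {
        defer close(in)
        for i := 1; i <= 3; i++ {
            in <- i
        }
    }()

    for s := range out {
        fmt.Println(s)
    }
}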