I was simply experimenting in Go when I came across an interesting result. This is my code:
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	var str1, str2 string

	wg.Add(2)
	go func() {
		fmt.Scanf("%s", &str1)
		wg.Done()
	}()
	go func() {
		fmt.Scanf("%s", &str2)
		wg.Done()
	}()
	wg.Wait()
	fmt.Printf("%s %s\n", str1, str2)
}
I gave the following input.
beat
it
I was expecting the result to be either
it beat
or
beat it
But I got:
eat bit
Can anyone please help me figure out why this is so?
fmt.Scanf isn't an atomic operation. Here's the implementation: http://golang.org/src/pkg/fmt/scan.go#L1115
There's no semaphore, nothing preventing two parallel executions. So what happens is simply that the two calls really do run in parallel, and since there's no buffering, every byte read is an I/O operation and thus a perfect opportunity for the Go scheduler to switch goroutines. The bytes of your two input words end up split between the two Scanf calls.
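If you do want each goroutine to read one whole word, the reads have to be serialized explicitly. Here is a minimal sketch of that idea, using a sync.Mutex so only one goroutine scans at a time (and fmt.Scan instead of Scanf, since Scan treats the newline between the two words as ordinary whitespace):

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	var mu sync.Mutex // guards stdin: only one goroutine may scan at a time
	var str1, str2 string

	wg.Add(2)
	go func() {
		defer wg.Done()
		mu.Lock()
		defer mu.Unlock()
		fmt.Scan(&str1) // reads one whole whitespace-delimited word
	}()
	go func() {
		defer wg.Done()
		mu.Lock()
		defer mu.Unlock()
		fmt.Scan(&str2)
	}()
	wg.Wait()
	// Prints "beat it" or "it beat", depending on which goroutine locked first,
	// but never a mix of the two words' bytes.
	fmt.Printf("%s %s\n", str1, str2)
}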
The problem is that you are sharing a single resource (the stdin byte stream) across multiple goroutines.
Each goroutine may be scheduled at different, non-deterministic times, so the bytes each one pulls off stdin are interleaved unpredictably.
In most cases it is enough to have a single goroutine read a linear resource such as a byte stream, attach a channel to it, and then spawn multiple consumers that listen on that channel.
For example:
package main

import (
	"fmt"
	"io"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	words := make(chan string, 10)

	// Single producer: the only goroutine that reads from stdin.
	wg.Add(1)
	go func() {
		for {
			var buff string
			_, err := fmt.Scanf("%s", &buff)
			if err != nil {
				if err != io.EOF {
					fmt.Println("Error: ", err)
				}
				break
			}
			words <- buff
		}
		close(words)
		wg.Done()
	}()

	// Multiple consumers, all fed from the same channel.
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			for word := range words {
				fmt.Printf("%s\n", word)
			}
			wg.Done()
		}()
	}
	wg.Wait()
}
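With this layout the stdin stream has a single owner: the producer reads whole words and sends them on the channel, and each word is received by exactly one consumer, whichever happens to be ready.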