Short version:
Is it possible in Go to spawn a number of external processes (shell commands) in parallel, such that it does not start one operating system thread per external process, and still be able to receive their output when they finish?
Longer version:
In Elixir, if you use ports, you can spawn thousands of external processes without really increasing the number of threads in the Erlang virtual machine.
E.g. the following code snippet, which starts 2500 external sleep processes, is managed by only 20 operating system threads under the Erlang VM:
defmodule Exmultiproc do
  for _ <- 1..2500 do
    cmd = "sleep 3600"
    IO.puts "Starting another process ..."
    Port.open({:spawn, cmd}, [:exit_status, :stderr_to_stdout])
  end
  System.cmd("sleep", ["3600"])
end
(Provided you set ulimit -n to a high number, such as 10000.)
On the other hand, the following code in Go, which is supposed to do the same thing (starting 2500 external sleep processes), also starts 2500 operating system threads. So it apparently starts one operating system thread per (blocking?) system call, so that a blocking call does not stall the other goroutines, if I understand correctly:
package main

import (
    "fmt"
    "os/exec"
    "sync"
)

func main() {
    wg := new(sync.WaitGroup)
    for i := 0; i < 2500; i++ {
        wg.Add(1)
        go func(i int) {
            fmt.Println("Starting sleep ", i, "...")
            cmd := exec.Command("sleep", "3600")
            _, err := cmd.Output()
            if err != nil {
                panic(err)
            }
            fmt.Println("Finishing sleep ", i, "...")
            wg.Done()
        }(i)
    }
    fmt.Println("Waiting for WaitGroup ...")
    wg.Wait()
    fmt.Println("WaitGroup finished!")
}
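(As a side note, the thread count can be checked on Linux by reading /proc/self/status from within the program. The following is only an illustrative sketch; the countOSThreads helper is not part of the code above.)

package main

import (
    "fmt"
    "os"
    "strings"
)

// countOSThreads reports the "Threads:" value from /proc/self/status,
// i.e. how many OS threads this process currently has (Linux only).
func countOSThreads() string {
    data, err := os.ReadFile("/proc/self/status")
    if err != nil {
        return "unknown"
    }
    for _, line := range strings.Split(string(data), "\n") {
        if strings.HasPrefix(line, "Threads:") {
            return strings.TrimSpace(strings.TrimPrefix(line, "Threads:"))
        }
    }
    return "unknown"
}

func main() {
    fmt.Println("OS threads in use:", countOSThreads())
}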
Thus, I was wondering if there is a way to write the Go code so that it does something similar to the Elixir code, without opening one operating system thread per external process?
I'm basically looking for a way to manage at least a few thousand external long-running (up to 10 days) processes, in a way that causes as few problems as possible with any virtual or physical limits in the operating system.
(Sorry for any mistakes in the code, as I'm new to Elixir and quite new to Go. I'm eager to learn about any mistakes I'm making.)
EDIT: Clarified the requirement to run the long-running processes in parallel.
I find that if we do not wait for the processes, the Go runtime will not start 2500 operating system threads. So use cmd.Start() instead of cmd.Output().
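A minimal sketch of that idea, reusing the sleep command from the question: start the processes with cmd.Start(), keep the *exec.Cmd values, and wait for them later from a single goroutine. This only illustrates the approach; the full program is further below.

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    cmds := make([]*exec.Cmd, 0, 2500)
    for i := 0; i < 2500; i++ {
        cmd := exec.Command("sleep", "3600")
        // Start launches the process and returns immediately; nothing here
        // blocks on the child's stdout.
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        cmds = append(cmds, cmd)
    }
    fmt.Println("started", len(cmds), "processes")

    // Wait for the children one at a time from the main goroutine instead of
    // blocking thousands of goroutines at once.
    for i, cmd := range cmds {
        if err := cmd.Wait(); err != nil {
            fmt.Println("process", i, "exited with error:", err)
        }
    }
}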
But it seems impossible to read a process's stdout without consuming an OS thread when using Go's os package. I think that is because the os package does not use non-blocking I/O to read the pipe.
The following program runs well on my Linux machine, although it blocks the process's stdout, as @JimB said in a comment; it probably works only because the output is small and fits into the system's pipe buffers.
package main

import (
    "bytes"
    "fmt"
    "io"
    "os"
    "os/exec"
    "time"
)

// result carries a started process and its stdout pipe, so that the main
// goroutine can wait on the process and read its output later.
type result struct {
    p *os.Process
    b io.ReadCloser
}

func main() {
    concurrentProcessCount := 50
    wtChan := make(chan *result, concurrentProcessCount)

    for i := 0; i < concurrentProcessCount; i++ {
        go func(i int) {
            fmt.Println("Starting process ", i, "...")
            cmd := exec.Command("bash", "-c", "for i in 1 2 3 4 5; do echo to sleep $i seconds; sleep $i; echo done; done;")
            outPipe, _ := cmd.StdoutPipe()
            // Start launches the process without waiting for it and without
            // reading its stdout.
            err := cmd.Start()
            if err != nil {
                panic(err)
            }
            <-time.Tick(time.Second)
            fmt.Println("Finishing process ", i, "...")
            wtChan <- &result{cmd.Process, outPipe}
        }(i)
    }

    fmt.Println("root:", os.Getpid())

    // Wait for the processes one at a time in the main goroutine, then read
    // whatever they wrote to their stdout pipes.
    waitDone := 0
forLoop:
    for {
        select {
        case r := <-wtChan:
            r.p.Wait()
            waitDone++
            output := &bytes.Buffer{}
            io.Copy(output, r.b)
            fmt.Println(waitDone, output.String())
            if waitDone == concurrentProcessCount {
                break forLoop
            }
        }
    }
}
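If the output can be larger than the pipe buffer, the stdout pipe has to be drained while the process is still running. The following is a sketch of that variant, at the cost (per the discussion above) of a blocked reader per process; the command and the result struct here are illustrative, with the struct carrying the collected output instead of the pipe.

package main

import (
    "bytes"
    "fmt"
    "io"
    "os/exec"
)

type result struct {
    id     int
    output *bytes.Buffer
}

func main() {
    concurrentProcessCount := 50
    results := make(chan *result, concurrentProcessCount)

    for i := 0; i < concurrentProcessCount; i++ {
        go func(i int) {
            cmd := exec.Command("bash", "-c", "for i in 1 2 3; do echo line $i; sleep $i; done")
            outPipe, err := cmd.StdoutPipe()
            if err != nil {
                panic(err)
            }
            if err := cmd.Start(); err != nil {
                panic(err)
            }
            // Drain stdout while the process is running so a large output
            // cannot fill the pipe buffer and stall the child.
            buf := &bytes.Buffer{}
            io.Copy(buf, outPipe)
            cmd.Wait()
            results <- &result{i, buf}
        }(i)
    }

    for done := 0; done < concurrentProcessCount; done++ {
        r := <-results
        fmt.Println(r.id, r.output.String())
    }
}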