I thought I'd found an easy way to return an HTTP response immediately and then do some work in the background without blocking. However, this doesn't work:
func MyHandler(w http.ResponseWriter, r *http.Request) {
    // handle form values
    go doSomeBackgroundWork() // this will take 2 or 3 seconds
    w.WriteHeader(http.StatusOK)
}
It works the first time: the response is returned immediately and the background work starts. However, any further requests hang until the background goroutine completes. Is there a better way to do this that doesn't involve setting up a message queue and a separate background process?
To make a function run in the background, insert the keyword `go` before the call (just as you do with `defer`). The function then runs in the background in its own goroutine.
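A minimal sketch, using a printHello function so the naming matches the explanation below (the final Sleep only keeps main alive long enough to see the output; real code would wait with a sync.WaitGroup or a channel instead):

package main

import (
    "fmt"
    "time"
)

func printHello() {
    fmt.Println("hello from a goroutine")
}

func main() {
    go printHello() // runs concurrently with main
    // Sleep just long enough for printHello to run before main exits.
    time.Sleep(100 * time.Millisecond)
}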
By default, every standalone Go application starts with one goroutine, known as the main goroutine, which runs the main function. In the example above, the main goroutine spawns another goroutine for the printHello function; let's call it the printHello goroutine.
If your computer has four processor cores and your program has four goroutines, all four goroutines can run simultaneously. When multiple streams of code are running at the same time on different cores like this, it's called running in parallel.
To make a goroutine stoppable, let it listen for a stop signal on a dedicated quit channel, and check this channel at suitable points in your goroutine. Here is a more complete example, where we use a single channel for both data and signalling.
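A sketch of that pattern (the consume/values names are just illustrative): closing the data channel is itself the stop signal, so one channel carries both the data and the signalling.

package main

import "fmt"

// consume reads values until the channel is closed, then reports it is done.
func consume(values <-chan int, done chan<- struct{}) {
    for v := range values { // the loop exits when values is closed
        fmt.Println("got", v)
    }
    fmt.Println("channel closed, stopping")
    close(done)
}

func main() {
    values := make(chan int)
    done := make(chan struct{})
    go consume(values, done)

    for i := 1; i <= 3; i++ {
        values <- i
    }
    close(values) // signal: no more data, please stop
    <-done        // wait for the goroutine to finish
}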
I know this question was posted 4 years ago, but I hope someone can find this useful.
Here is a way to do that.
There is a pattern called a worker pool (https://gobyexample.com/worker-pools) that uses goroutines and channels.
In the following code I adapt it to an HTTP handler. (For simplicity, I'm ignoring errors and using jobs as a global variable.)
package main

import (
    "fmt"
    "net/http"
    "time"
)

var jobs chan int

// worker drains the jobs channel in the background, one job at a time.
func worker(jobs <-chan int) {
    fmt.Println("Register the worker")
    for i := range jobs {
        fmt.Println("worker processing job", i)
        time.Sleep(time.Second * 5) // simulate slow work
    }
}

// handler enqueues a job and responds immediately.
func handler(w http.ResponseWriter, r *http.Request) {
    jobs <- 1
    fmt.Fprintln(w, "hello world")
}

func main() {
    jobs = make(chan int, 100) // buffered, so the handler doesn't block
    go worker(jobs)
    http.HandleFunc("/request", handler)
    http.ListenAndServe(":9090", nil)
}
The explanation:
main() creates the buffered jobs channel, starts a single worker goroutine, registers the handler, and listens on :9090.
worker() ranges over the jobs channel and processes one job at a time (the 5-second Sleep simulates slow work).
handler() pushes a job onto the channel and writes the response immediately, so the client never waits for the background work.
The cool thing is that you can send as many requests as you want, and because this scenario contains only one worker, each new job waits until the previous one has finished, while every request still gets its response right away (as long as the channel buffer isn't full).
This is awesome Go!
Go multiplexes goroutines onto the available OS threads, and how many of those it uses is determined by the GOMAXPROCS setting. As a result, if this is set to 1, a single goroutine can hog the one thread Go has available until it yields control back to the Go runtime. More than likely, doSomeBackgroundWork is hogging all the time on that single thread, which prevents the http handler from getting scheduled.
There are a number of ways to fix this.
First, as a general rule when using goroutines, you should set GOMAXPROCS to at least the number of CPUs your system has.
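For example (note this is only needed on Go versions before 1.5; since then GOMAXPROCS already defaults to the number of CPUs):

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Use every available core; on Go < 1.5 GOMAXPROCS defaulted to 1.
    runtime.GOMAXPROCS(runtime.NumCPU())
    fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0)) // passing 0 just reads the current value
}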
Second, you can yield control in a goroutine by doing any of the following:
runtime.Gosched()
ch <- foo
foo := <-ch
select { ... }
mutex.Lock()
mutex.Unlock()
All of these will yield back to the Go runtime scheduler, giving other goroutines a chance to work.
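For example, if the question's doSomeBackgroundWork is a tight CPU-bound loop, a sketch of how it might yield periodically (the loop body is hypothetical, and "runtime" would need to be added to the program's imports):

func doSomeBackgroundWork() {
    for i := 0; i < 1000000000; i++ {
        // ... CPU-bound work that never blocks ...
        if i%1000000 == 0 {
            runtime.Gosched() // hand the thread back so the HTTP handler goroutine can run
        }
    }
}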