package main

import (
    "database/sql"
    "fmt"
    "sync"

    _ "github.com/lib/pq"
)

func main() {
    db, _ := sql.Open("postgres", fmt.Sprintf("host=%s dbname=%s user=%s sslmode=disable", "localhost", "dbname", "postgres"))
    defer db.Close()
    db.SetMaxOpenConns(15)

    var wg sync.WaitGroup
    for i := 0; i < 15; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // #1
            rows, _ := db.Query("SELECT * FROM reviews LIMIT 1")
            for rows.Next() {
                // #2
                db.Exec("SELECT * FROM reviews LIMIT 1")
            }
        }()
    }
    wg.Wait()
}
Query #1 opens 15 connections, and they are only released once rows.Next() has iterated over the entire result set. But the loops never finish, because each iteration calls db.Exec() (query #2), which waits for a free connection.
How can I solve this problem?
What you have is a deadlock. In the worst-case scenario you have 15 goroutines holding 15 database connections, and all 15 of those goroutines require a new connection to continue. But to get a new connection, one of them would have to advance and release its connection: deadlock.
The Wikipedia article on deadlocks details prevention techniques. For example, code should only enter a critical section (which locks resources) when it has acquired all the resources it needs (or will need). In this case that would mean reserving 2 connections (exactly 2; if only 1 is available, leave it and wait) and only proceeding with the queries once you have both. But in Go you can't reserve connections in advance: they are allocated on demand as you execute queries.
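You can, however, approximate the idea by bounding concurrency: if at most 7 goroutines run at once, the 15-connection pool is always large enough for each of them to hold 2 connections. Here is a sketch using a buffered channel as a counting semaphore (the limit of 7 is derived from the pool size in this example; error handling is again omitted):

sem := make(chan struct{}, 7) // at most 7 goroutines may hold connections at once

for i := 0; i < 15; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        sem <- struct{}{}        // acquire a slot before using the pool
        defer func() { <-sem }() // release the slot when done
        rows, _ := db.Query("SELECT * FROM reviews LIMIT 1")
        defer rows.Close()
        for rows.Next() {
            // A 2nd connection is always available: 7 goroutines * 2 = 14 <= 15.
            db.Exec("SELECT * FROM reviews LIMIT 1")
        }
    }()
}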
Generally this pattern should be avoided: you should not write code which first reserves a (finite) resource (a DB connection in this case) and then, before releasing it, demands another one.
An easy workaround is to execute the first query, save its result (e.g. into a Go slice), and only proceed with the subsequent queries once you're done with it (not forgetting to close the sql.Rows first). This way your code never needs 2 connections at the same time.
And don't forget to handle errors! I omitted them for brevity (see the sketch after the next example), but you should not in your code.
This is how it could look:
go func() {
    defer wg.Done()
    rows, _ := db.Query("SELECT * FROM reviews LIMIT 1")
    var data []int // Use whatever type describes the data you query
    for rows.Next() {
        var something int
        rows.Scan(&something)
        data = append(data, something)
    }
    rows.Close()
    for _, v := range data {
        _ = v // Use v as a query parameter if needed
        db.Exec("SELECT * FROM reviews LIMIT 1")
    }
}()
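As for the omitted error handling: the first half of the goroutine might look like this with errors checked (a sketch; it assumes the "log" package is imported for reporting):

rows, err := db.Query("SELECT * FROM reviews LIMIT 1")
if err != nil {
    log.Println(err)
    return
}
var data []int
for rows.Next() {
    var something int
    if err := rows.Scan(&something); err != nil {
        log.Println(err)
        break
    }
    data = append(data, something)
}
if err := rows.Err(); err != nil { // any error encountered during iteration
    log.Println(err)
}
rows.Close()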
Note that rows.Close() should be executed as a defer statement to make sure it runs even in case of a panic. But if you simply used defer rows.Close() here, it would only execute after the subsequent queries have run, so it would not prevent the deadlock. I would therefore refactor the iteration into another function (which may be an anonymous function) in which you can use defer:
rows, _ := db.Query("SELECT * FROM reviews LIMIT 1")
var data []int // Use whatever type describes the data you query
func() {
    defer rows.Close()
    for rows.Next() {
        var something int
        rows.Scan(&something)
        data = append(data, something)
    }
}()
Also note that in the second for loop a prepared statement (sql.Stmt) acquired by DB.Prepare() would probably be a much better choice to execute the same (parameterized) query multiple times.
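For illustration, a minimal sketch of that approach; the WHERE id = $1 filter is made up for the example, and errors are once more left unchecked:

// Prepare the statement once; it can then be executed many times,
// each run reusing the parsed query with a new parameter.
stmt, _ := db.Prepare("SELECT * FROM reviews WHERE id = $1 LIMIT 1")
defer stmt.Close()

for _, v := range data {
    stmt.Exec(v) // v is bound to the $1 placeholder
}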
Another option is to launch the subsequent queries in new goroutines, so that they can execute once the currently locked connection (or any connection locked by another goroutine) is released. But then, without explicit synchronization, you have no control over when they get executed. It could look like this:
go func() {
    defer wg.Done()
    rows, _ := db.Query("SELECT * FROM reviews LIMIT 1")
    defer rows.Close()
    for rows.Next() {
        var something int
        rows.Scan(&something)
        // Pass something if needed
        go db.Exec("SELECT * FROM reviews LIMIT 1")
    }
}()
To make your program wait for these goroutines too, use the WaitGroup you already have in action, replacing the go db.Exec(...) line above with:
// Pass something if needed
wg.Add(1)
go func() {
    defer wg.Done()
    db.Exec("SELECT * FROM reviews LIMIT 1")
}()