How do I batch sql statements with Go's database/sql package?
In Java I would do it like this:
// Create a prepared statement
String sql = "INSERT INTO my_table VALUES(?)";
PreparedStatement pstmt = connection.prepareStatement(sql);

// Insert 10 rows of data
for (int i = 0; i < 10; i++) {
    pstmt.setString(1, "" + i);
    pstmt.addBatch();
}

// Execute the batch
int[] updateCounts = pstmt.executeBatch();
How would I achieve the same in Go?
A batch of SQL statements is a group of two or more SQL statements, or a single SQL statement that has the same effect as such a group. In some implementations, the entire batch is executed before any results are available.
You can execute database transactions using an sql.Tx, which represents a transaction. In addition to Commit and Rollback methods representing transaction-specific semantics, sql.Tx has all of the methods you use to perform common database operations.
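A minimal sketch of that approach, mirroring the Java example above; it assumes an already-open *sql.DB named db and a driver that uses ? placeholders (e.g. MySQL):

import (
	"database/sql"
	"strconv"
)

// insertBatchTx inserts 10 rows inside a single transaction using a
// prepared statement, the closest analogue to the Java addBatch example.
func insertBatchTx(db *sql.DB) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // a no-op once Commit has succeeded

	stmt, err := tx.Prepare("INSERT INTO my_table VALUES (?)")
	if err != nil {
		return err
	}
	defer stmt.Close()

	// One Exec per row; all inserts reuse the same prepared statement
	// and are committed (or rolled back) as a unit.
	for i := 0; i < 10; i++ {
		if _, err := stmt.Exec(strconv.Itoa(i)); err != nil {
			return err
		}
	}
	return tx.Commit()
}

Note that, unlike JDBC's addBatch, each stmt.Exec here is its own call to the driver; the transaction only guarantees atomicity, not a single round trip.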
You can query for multiple rows using Query or QueryContext, which return a Rows representing the query results. Your code iterates over the returned rows using Rows.Next. Each iteration calls Scan to copy column values into variables.
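For example (a rough sketch; the column name "value" is just a placeholder, and db is assumed to be an open *sql.DB):

// readAll scans every row of my_table into a slice of strings.
func readAll(db *sql.DB) ([]string, error) {
	rows, err := db.Query("SELECT value FROM my_table")
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var values []string
	for rows.Next() {
		var v string
		if err := rows.Scan(&v); err != nil {
			return nil, err
		}
		values = append(values, v)
	}
	// Rows.Err reports any error that stopped the iteration early.
	return values, rows.Err()
}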
Since db.Exec is variadic, one option (which makes only a single network round trip) is to construct the statement yourself, explode the arguments, and pass them in.
Sample code:
func BulkInsert(unsavedRows []*ExampleRowStruct) error {
	valueStrings := make([]string, 0, len(unsavedRows))
	valueArgs := make([]interface{}, 0, len(unsavedRows)*3)

	// Build one (?, ?, ?) group and three arguments per row.
	for _, post := range unsavedRows {
		valueStrings = append(valueStrings, "(?, ?, ?)")
		valueArgs = append(valueArgs, post.Column1)
		valueArgs = append(valueArgs, post.Column2)
		valueArgs = append(valueArgs, post.Column3)
	}

	// Single multi-row INSERT executed in one call.
	stmt := fmt.Sprintf("INSERT INTO my_sample_table (column1, column2, column3) VALUES %s",
		strings.Join(valueStrings, ","))
	_, err := db.Exec(stmt, valueArgs...)
	return err
}
In a simple test I ran, this solution is about 4 times faster at inserting 10,000 rows than the Begin, Prepare, Commit approach presented in the other answer, though the actual improvement will depend a lot on your individual setup, network latency, and so on.
If you’re using PostgreSQL, then pq supports bulk imports.
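Roughly like this with pq's CopyIn helper (a sketch reusing the ExampleRowStruct fields from above; the table and column names are assumed):

import (
	"database/sql"

	"github.com/lib/pq"
)

// bulkCopy streams rows to PostgreSQL via COPY, which is typically the
// fastest way to load many rows with the pq driver.
func bulkCopy(db *sql.DB, unsavedRows []*ExampleRowStruct) error {
	txn, err := db.Begin()
	if err != nil {
		return err
	}
	defer txn.Rollback()

	stmt, err := txn.Prepare(pq.CopyIn("my_sample_table", "column1", "column2", "column3"))
	if err != nil {
		return err
	}

	for _, row := range unsavedRows {
		if _, err := stmt.Exec(row.Column1, row.Column2, row.Column3); err != nil {
			return err
		}
	}

	// An Exec with no arguments flushes the buffered COPY data.
	if _, err := stmt.Exec(); err != nil {
		return err
	}
	if err := stmt.Close(); err != nil {
		return err
	}
	return txn.Commit()
}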