I would like to insert several rows into a Postgres table in a single query, but the value for one of the columns needs to be calculated from a scalar result of a sub-query and a passed bound parameter. The calculation is a concatenation of two Postgres arrays.
I was able to do it with a query like this:
INSERT INTO my_table (col1, col2, computed_col)
VALUES
(
:col1Val1,
:col2Val1,
(SELECT some_col FROM some_table WHERE id = :id) || ARRAY[:computed_col1]::bigint[]
),
(
:col1Val2,
:col2Val2,
(SELECT some_col FROM some_table WHERE id = :id) || ARRAY[:computed_col2]::bigint[]
);
A CTE works as well, but it looks unnecessary, because we still need a scalar SELECT sub-query against the CTE "table" for each set of values.
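For reference, the CTE variant I mean looks roughly like this (a sketch with the same table and parameter names as above; note the SELECT from the CTE is still repeated per row):

WITH src AS (SELECT some_col FROM some_table WHERE id = :id)
INSERT INTO my_table (col1, col2, computed_col)
VALUES
(
:col1Val1,
:col2Val1,
(SELECT some_col FROM src) || ARRAY[:computed_col1]::bigint[]
),
(
:col1Val2,
:col2Val2,
(SELECT some_col FROM src) || ARRAY[:computed_col2]::bigint[]
);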
As you can see, the SELECT sub-query is the same for each set of data to be inserted. Is it possible to specify the sub-query once and reuse its result without repeating it, or is there some other way to optimize the query above?
And what issues could this query cause from a performance point of view?
You can use insert . . . select, basically moving the VALUES() into the FROM clause:
INSERT INTO my_table (col1, col2, computed_col)
SELECT v.col1, v.col2, x.some_col || v.computed
FROM (SELECT some_col FROM some_table WHERE id = :id
     ) x CROSS JOIN
     (VALUES (:col1Val1, :col2Val1, ARRAY[:computed_col1]::bigint[]),
             (:col1Val2, :col2Val2, ARRAY[:computed_col2]::bigint[])
     ) v(col1, col2, computed);
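If you do want a CTE, the same join works there as well; this is just an equivalent rewrite of the query above, with the sub-query written exactly once:

WITH x AS (
    SELECT some_col FROM some_table WHERE id = :id
)
INSERT INTO my_table (col1, col2, computed_col)
SELECT v.col1, v.col2, x.some_col || v.computed
FROM x CROSS JOIN
     (VALUES (:col1Val1, :col2Val1, ARRAY[:computed_col1]::bigint[]),
             (:col1Val2, :col2Val2, ARRAY[:computed_col2]::bigint[])
     ) v(col1, col2, computed);

As for performance: both of these forms should evaluate the some_table lookup once for the whole statement, rather than once per scalar sub-query occurrence as in your original VALUES version. For a handful of rows the difference is negligible; the main benefit is not repeating the sub-query.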