I would like to know if a PostgreSQL multi-row INSERT
is safe (either all data is inserted, or none is, in case of a system/database failure). Example query:
INSERT INTO "tests" ("name") VALUES ('1'), ('2')
The PostgreSQL multi-row INSERT syntax works like this: first, specify the name of the table you want to insert data into after the INSERT INTO keywords. Second, list the target columns (or all columns of the table) in parentheses after the table name. Third, supply a comma-separated list of rows after the VALUES keyword.
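Applied to the tests table from the question (with a third row added purely for illustration), that pattern looks like:

INSERT INTO "tests" ("name")
VALUES
    ('1'),
    ('2'),
    ('3');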
The answer for this simple case is: "Yes".
Your own answer (now deleted) misinterprets things. A single INSERT
statement is always atomic. Your example INSERT
with two rows is all or nothing: either both rows are inserted or neither is.
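A quick way to see this for yourself: assuming, purely for illustration, that name has a NOT NULL constraint, a multi-row INSERT in which one row violates the constraint inserts nothing at all:

INSERT INTO tests (name) VALUES ('1'), (NULL);
-- fails with a not-null violation; neither row is inserted, the table is unchanged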
Whether autocommit is enabled or not is up to your client.
In psql, autocommit is enabled by default. All that means is that each statement runs as a transaction of its own, unless you wrap (any number of) statements in an explicit transaction block. So this is "all or nothing":
INSERT INTO test (name) VALUES ('1'), ('2');
Just like this:
BEGIN;
INSERT INTO test (name) VALUES ('1');
INSERT INTO test (name) VALUES ('2');
COMMIT;
But not this:
INSERT INTO test (name) VALUES ('1');
INSERT INTO test (name) VALUES ('2');
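If you switch autocommit off in psql (\set AUTOCOMMIT off), psql implicitly opens a transaction before the first statement, so even the two separate statements behave as all or nothing until you issue COMMIT:

\set AUTOCOMMIT off
INSERT INTO test (name) VALUES ('1');
INSERT INTO test (name) VALUES ('2');
COMMIT;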
In pgAdmin, everything you send from the query window at once is wrapped into a single transaction by default.