I have to duplicate values from one table to another (the tables have identical schemas). Which is better for performance: dropping and re-creating the target table, or deleting its rows and re-inserting?
Update: I made a small test on a table with almost 3k rows. Drop and create took about 60 ms, versus about 30 ms for delete and insert.
TRUNCATE quickly removes all rows from a set of tables. It has the same effect as an unqualified DELETE on each table, but since it does not actually scan the tables it is faster. Furthermore, it reclaims disk space immediately, rather than requiring a subsequent VACUUM operation.
Performance-wise the DROP command itself is quick, but overall it tends to be slower than TRUNCATE because of the extra work it creates: DROP removes the table definition along with the data, so the table, its indexes, constraints and privileges all have to be recreated afterwards. Dropping a table frees its disk space immediately, since the table and all of its contents are permanently deleted. (Note that in PostgreSQL, unlike in many other databases, even a DROP can be rolled back if it is issued inside a transaction.)
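For comparison, here is a minimal sketch of the drop-and-recreate approach; foo is the target table, and foo_src is a hypothetical source table for the new rows (both names are placeholders):

BEGIN;
DROP TABLE IF EXISTS foo;
-- (LIKE ... INCLUDING ALL) copies column definitions, defaults, constraints and indexes
CREATE TABLE foo (LIKE foo_src INCLUDING ALL);
INSERT INTO foo SELECT * FROM foo_src;
COMMIT;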
The PostgreSQL INSERT statement adds a new row to a table. After the INSERT INTO keywords you specify the name of the table (table_name) you want to insert data into, followed by a comma-separated list of columns (column1, column2, ...).
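A minimal illustration of the two forms used below, with placeholder table and column names:

-- explicit column list with literal values
INSERT INTO foo (column1, column2) VALUES ('a', 1);
-- copying rows from another, identically structured table (foo_src is hypothetical)
INSERT INTO foo (column1, column2)
SELECT column1, column2 FROM foo_src;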
As a rule of thumb I would say an UPDATE is usually faster than a DELETE; as has been stated, an update has less work to do than a delete. But it depends largely on how the update is done and on the table structure the delete runs against.
I see four useful ways to replace the contents of the table. None of them is "obviously right"; which one fits depends on your requirements.
1. (In a single transaction) DELETE FROM foo; INSERT INTO foo SELECT ...
Pro: Best concurrency: doesn't lock out other transactions accessing the table, as it leverages Postgres's MVCC.
Con: Probably the slowest if you measure the insert-speed alone. Causes autovacuum to clean up dead rows, thus creating a higher I/O load.
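A minimal sketch of this variant, assuming the new contents come from a hypothetical source table foo_src:

BEGIN;
DELETE FROM foo;                        -- old rows become dead tuples, cleaned up later by autovacuum
INSERT INTO foo SELECT * FROM foo_src;  -- readers in other transactions still see the old rows until COMMIT
COMMIT;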
2. TRUNCATE foo; INSERT INTO foo SELECT ...
Pro: Fastest for smaller tables. Causes less write I/O than #1
Con: Excludes all other readers -- other transactions reading from the table will have to wait.
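The same sketch with TRUNCATE (foo_src is again a placeholder):

BEGIN;
TRUNCATE foo;                           -- takes an ACCESS EXCLUSIVE lock until COMMIT, blocking readers
INSERT INTO foo SELECT * FROM foo_src;
COMMIT;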
3. TRUNCATE foo; DROP all indexes on the table; INSERT INTO foo SELECT ...; re-create all indexes.
Pro: Fastest for large tables, because creating indexes with CREATE INDEX is faster than updating them incrementally.
Con: Same as #2
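A sketch of this variant; the index name, column and source table are hypothetical, and the DROP INDEX / CREATE INDEX pair has to be repeated for every index on the table:

BEGIN;
TRUNCATE foo;
DROP INDEX foo_column1_idx;                     -- hypothetical index name
INSERT INTO foo SELECT * FROM foo_src;
CREATE INDEX foo_column1_idx ON foo (column1);  -- one bulk index build instead of row-by-row updates
COMMIT;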
4. The switcheroo. Create two identical tables, foo and foo_tmp, then:
TRUNCATE foo_tmp;
INSERT INTO foo_tmp SELECT ...;
ALTER TABLE foo RENAME TO foo_tmp1;
ALTER TABLE foo_tmp RENAME TO foo;
ALTER TABLE foo_tmp1 RENAME TO foo_tmp;
Thanks to PostgreSQL's transactional DDL capabilities, if this is done in a transaction, the rename is performed without other transactions noticing. You can also combine this with #3 and drop/create indexes.
Pro: Less I/O performed, like #2, and without locking out other readers (locks taken only during the rename part).
Con: The most complicated. Also you cannot have foreign keys or views pointing to the table, as they would point to the wrong table after renaming it.
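Putting the pieces together, a sketch of how the twin table might be created once and how the swap looks wrapped in a transaction; foo_src is a hypothetical source of the new rows:

-- one-time setup: (LIKE ... INCLUDING ALL) copies defaults, constraints and indexes
CREATE TABLE foo_tmp (LIKE foo INCLUDING ALL);

-- the actual refresh, repeated whenever the contents need replacing
BEGIN;
TRUNCATE foo_tmp;
INSERT INTO foo_tmp SELECT * FROM foo_src;
ALTER TABLE foo RENAME TO foo_tmp1;
ALTER TABLE foo_tmp RENAME TO foo;
ALTER TABLE foo_tmp1 RENAME TO foo_tmp;
COMMIT;  -- other transactions see the swap only once this commits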