I know this is a weird request, but for some hacky reasons I can't avoid, I'd like to be able to consistently sync a few tables from one database to another. I know I could write out the functionality myself in a script, but I figure pg_dump and pg_restore will apply a lot of optimizations to the process that I'm not aware of myself.

What I'm wondering is if there's a way to have pg_restore overwrite the existing tables. Basically, in pseudo-code, something like:
-- pseudo code
begin;
drop table to_restore;
drop table to_restore2;
drop table to_restore3;
-- etc
restore table to_restore;
restore table to_restore2;
restore table to_restore3;
-- etc
commit;
I'm also open to alternative ways of doing this if this approach isn't a good one.
Seems like you want the -c option specified in the pg_restore documentation:
-c
--clean
Clean (drop) database objects before recreating them. (Unless --if-exists is used, this might generate some harmless error messages, if any objects were not present in the destination database.)
which you can use with the -1 flag to do everything in one transaction:
-1
--single-transaction
Execute the restore as a single transaction (that is, wrap the emitted commands in BEGIN/COMMIT). This ensures that either all the commands complete successfully, or no changes are applied. This option implies --exit-on-error.
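Putting the two together, a minimal sketch of the whole round trip might look like the following, assuming the tables live in databases named source_db and target_db (both names, like the dump file name, are placeholders):

# Dump only the tables of interest from the source database, in custom format.
pg_dump -Fc -t to_restore -t to_restore2 -t to_restore3 -f tables.dump source_db
# Restore into the target, dropping and recreating those tables in one transaction.
pg_restore --clean --if-exists --single-transaction -d target_db tables.dump

One caveat: --clean emits plain DROP TABLE statements, so if other objects on the target (views, foreign keys) depend on these tables, the drops will fail and you'd have to handle those dependencies separately.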
This is just one example of a possible solution: copy those tables from the first database to CSV files, then load them with the extremely fast COPY command inside a transaction:
begin;
-- Empty the target tables quickly; TRUNCATE is much faster than DELETE.
truncate table to_restore;
truncate table to_restore2;
truncate table to_restore3;
-- Session settings sometimes used to speed up bulk loads: commit_delay groups
-- WAL flushes, and synchronous_commit = off skips waiting for them at commit.
set commit_delay to 100000;
set synchronous_commit to off;
-- COPY ... FROM reads files on the database server, so the CSVs must be there.
copy to_restore from 'to_restore.csv' with (format csv);
copy to_restore2 from 'to_restore2.csv' with (format csv);
copy to_restore3 from 'to_restore3.csv' with (format csv);
commit;
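The answer above doesn't show the export step; one way to produce those CSV files, as a sketch, is psql's client-side \copy run against the source database (source_db and the file names are placeholders):

# \copy writes the files on the client machine, while the COPY ... FROM above reads
# them on the target server, so the files must end up somewhere that server can read.
psql -d source_db -c "\copy to_restore to 'to_restore.csv' with (format csv)"
psql -d source_db -c "\copy to_restore2 to 'to_restore2.csv' with (format csv)"
psql -d source_db -c "\copy to_restore3 to 'to_restore3.csv' with (format csv)"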