I'm importing some rows to my postgres database like so:
psql -U postgres import_test < 1432798324_data
where import_test is my database and 1432798324_data is a plain-text file formatted like:
COPY cars FROM stdin;
<row data>
<row data>
...
\.
COPY drivers FROM stdin;
<row data>
<row data>
...
\.
(I got the format for this plain-text file from the answer here).
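For what it's worth, this is the same format pg_dump produces for table data in plain data-only mode, so a file like this can be generated with something along these lines (the source database name here is just an example):

pg_dump -U postgres --data-only some_source_db > 1432798324_data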
This method works fine when I'm importing into a blank database. However, if the database isn't blank and any duplicate rows are encountered during the import, I get an error:
ERROR: duplicate key value violates unique constraint "car_pkey"
Is there any way I could modify my import command to force an overwrite if duplicates are found? In other words, if I'm importing a row and there's already a row with that id, I want my new row to overwrite it.
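To reproduce this, a minimal table like the following is enough (the column names are illustrative; only the primary key matters, and I've named the constraint car_pkey to match the error):

create table cars (
    id integer constraint car_pkey primary key,
    model text
);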
You can import into a temporary table first, then delete the rows that were already there before copying over the new data. Note that the COPY statements in your data file will need to target the temporary table (import_drivers below) rather than drivers:
create temporary table import_drivers as
    select * from drivers limit 0;   -- clone the column layout of drivers, but no rows

copy import_drivers from stdin;      -- load the incoming rows into the staging table

begin transaction;

delete from drivers                  -- remove every existing row that is about to be replaced
where id in (select id from import_drivers);

insert into drivers                  -- then insert the new versions
select * from import_drivers;

commit transaction;
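Alternatively, on PostgreSQL 9.5 and later you can skip the delete step entirely and let the insert overwrite duplicates itself with ON CONFLICT. A minimal sketch, assuming drivers has a primary key id and a single data column name (list each non-key column in the set clause for your real table):

insert into drivers
select * from import_drivers
on conflict (id) do update
    set name = excluded.name;   -- excluded holds the incoming row's values

One caveat for either approach: if import_drivers itself contains the same id twice, the statement will fail (a single insert can't update or insert the same key twice), so deduplicate the staging table first.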