Suppose I have a database table that contains 200k+ rows.
The table has one fixed row with id 1800; the ids of all the other rows start above 300k.
I need to clean this table: delete every record except the one with id 1800. I came up with 3 queries I could possibly run:
DELETE FROM table WHERE id > 1800
DELETE FROM table WHERE id <> 1800
DELETE FROM table WHERE id NOT IN (1800)
I have a feeling that the first one is quicker than the others, but I am not sure, since all of the other rows have ids far greater than 1800.
Which one of them is quicker, and why? Also, if there is a quicker way to delete the records excluding the one that cannot be deleted, let me know.
The quickest way in most databases would be:
1. Copy the row you want to keep into a new table.
2. Drop the old table.
3. Rename the new table to the old name.
Admittedly, this may not be possible due to triggers, constraints, and permissions. In many databases you can do something similar by modifying step (2) to truncate the table instead of dropping it.
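As a sketch, the drop-and-recreate approach might look like this in Postgres (here `t` stands in for your real table name; note that this loses indexes, constraints, defaults, triggers, and privileges defined on the original table):

```sql
-- Keep only row 1800 by rebuilding the table.
-- "t" is a stand-in for the real table name.
CREATE TABLE t_new AS
SELECT * FROM t WHERE id = 1800;

DROP TABLE t;

ALTER TABLE t_new RENAME TO t;
```

Because the new table is written with a single-row scan and the old data pages are simply dropped, no per-row delete work is done at all.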
As for your original question, the overhead of actually deleting the rows and the data associated with them is going to dominate the query. How you write the comparison is irrelevant.
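If you want to verify this on your own data, Postgres's `EXPLAIN ANALYZE` reports actual execution timings, so you can compare the predicates directly (again assuming the table is named `t`; wrapping each run in a rolled-back transaction keeps the data intact):

```sql
-- Time each variant without actually losing any rows.
BEGIN;
EXPLAIN ANALYZE DELETE FROM t WHERE id > 1800;
ROLLBACK;

BEGIN;
EXPLAIN ANALYZE DELETE FROM t WHERE id <> 1800;
ROLLBACK;
```

You should see near-identical timings, because the cost is in the row deletions themselves, not in evaluating the WHERE clause.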
Sample code
create temp table saved as
select * from t where id = 1800;

truncate table t;

insert into t
select * from saved;
I'm not sure about Postgres naming conventions for temporary tables, but this is the idea.