I'm trying to add a column to a table in my Postgres 9.3 database with this seemingly simple SQL:
ALTER TABLE quizzes ADD COLUMN deleted BOOLEAN NOT NULL DEFAULT false;
However, I'm getting the following error:
ERROR: could not create unique index "quizzes_pkey"
DETAIL: Key (id)=(10557462) is duplicated.
Strangely enough, there are actually no rows with that id (which is the primary key, so it shouldn't have duplicates):
SELECT id FROM quizzes WHERE id = 10557462;
id
----
(0 rows)
In fact, it looks like that id has been skipped somehow:
SELECT id FROM quizzes WHERE id > 10557459 ORDER BY id LIMIT 4;
id
----------
10557460
10557461
10557463
10557464
(4 rows)
Why is this preventing me from adding a column, and how can I fix it?
I suspect you have pre-existing index corruption or visibility issues.
When you ALTER TABLE ... ADD COLUMN ... DEFAULT ..., it does a full table rewrite. This rebuilds all indexes, noticing the problem on the heap in the process.
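(As an aside, on a table that isn't affected by this corruption you can confirm such a rewrite happens by watching pg_class.relfilenode, which points at the relation's physical file and changes when the table is rewritten; here the rewrite is exactly what trips over the problem.)
-- Illustrative only: run before and after the ALTER on a healthy table;
-- a different relfilenode afterwards means the table was rewritten.
SELECT relfilenode FROM pg_class WHERE relname = 'quizzes';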
You'll probably find that VACUUM FULL on the table produces the same error.
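For example, something like this should fail with the same duplicate-key message, since VACUUM FULL also rewrites the table and rebuilds quizzes_pkey (it takes an exclusive lock, so don't run it casually on a busy table):
-- Expected (if the theory is right) to fail the same way the ALTER TABLE did.
VACUUM FULL quizzes;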
I expect that
BEGIN;
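-- Force a sequential scan so the heap is read directly,
-- bypassing the (possibly corrupt) primary-key index.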
SET LOCAL enable_indexscan = off;
SET LOCAL enable_bitmapscan = off;
SET LOCAL enable_indexonlyscan = off;
SELECT ctid,xmin,xmax,id FROM quizzes WHERE id = 10557462;
ROLLBACK;
will reveal that the tuples actually do exist.
Please first read and act on this wiki page. Once you've done that, check your version. Are you running, or have you ever run, a PostgreSQL 9.3 release older than 9.3.9? Especially as a replica that was then promoted? If so, that likely explains it, due to the known multixact bugs that were fixed in that release.
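Checking the exact server version is just plain SQL:
-- Shows the full version string, e.g. "PostgreSQL 9.3.x on ...".
SELECT version();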
Otherwise, it's hard to say what's happening. It'd be necessary to take a look at the problem heap page(s) using pageinspect, at the pg_controldata output, and possibly at the b-tree pages referring to those heap pages.
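A minimal sketch of that kind of inspection, assuming the pageinspect extension is available and that you take the block number from the ctid returned by the query above (the page numbers below are placeholders):
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Raw line pointers and tuple headers for one heap page of "quizzes";
-- replace 0 with the block number (the first half of the suspect ctid).
SELECT lp, t_ctid, t_xmin, t_xmax, t_infomask
FROM heap_page_items(get_raw_page('quizzes', 0));

-- Entries on one page of the primary-key index; replace 1 with the page
-- of interest (page 0 is the b-tree metapage and can't be passed here).
SELECT itemoffset, ctid, itemlen
FROM bt_page_items('quizzes_pkey', 1);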
I've accepted @Craig Ringer's answer because I never would have been able to resolve the problem without it. In case it helps anyone else, here are the exact statements I used to solve the problem (luckily for me, the duplicate rows could simply be deleted):
BEGIN;
SET LOCAL enable_indexscan = off;
SET LOCAL enable_bitmapscan = off;
SET LOCAL enable_indexonlyscan = off;
DELETE FROM quizzes WHERE id = 10557462;
COMMIT;
After that, my original query finally succeeded:
ALTER TABLE quizzes ADD COLUMN deleted BOOLEAN NOT NULL DEFAULT false;
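For anyone following the same route, a sanity check along these lines confirms the index is clean again (note that REINDEX locks the table while it runs):
-- Rebuild the primary-key index from the now-clean heap.
REINDEX INDEX quizzes_pkey;

-- Should return zero rows if every id is unique again.
SELECT id, count(*) FROM quizzes GROUP BY id HAVING count(*) > 1;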