
Sequential increment skipping numbers

ID | info
---------
1  | xxx
2  | xxx
4  | xxx

I have a setup similar to this in PostgreSQL. (I'm using a Mac running Mojave, if that makes a difference.)

I have a few more columns, but the issue is the ID column. It was set up to be sequential, and I insert into the DB using Express.

The problem is that the ID is incremented on every insert, even failing inserts. info is defined as unique, so trying to insert an existing info returns an error, but the ID is still incremented.

For example, if I insert 2 rows, I have id 1 and id 2. Then another insert fails with a unique violation. The next successful insert gets id = 4, skipping id = 3.

Can I set up Postgres so it doesn't increment on failed inserts?

asked Dec 05 '18 by thatdevop

1 Answer

serial columns, or IDENTITY columns in Postgres 10 or later, draw numbers from a SEQUENCE, and gaps are to be expected. Their job is to make concurrent write access possible with unique numbers, not necessarily gap-less numbers.

If you don't actually have concurrent write access, there are simple ways to achieve (mostly) gap-less numbers. Like:

-- draws a serial number only if the row is actually inserted
INSERT INTO tbl (info) 
SELECT 'xxx'
WHERE NOT EXISTS (SELECT FROM tbl WHERE info = 'xxx');

That doesn't burn a serial ID from the SEQUENCE, because a duplicate insert is skipped. (The INSERT might still fail for any other reason and burn a serial number.) You could reset the SEQUENCE in such a case:

  • How to reset postgres' primary key sequence when it falls out of sync?
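A minimal sketch of such a reset, assuming a table tbl with a serial (or IDENTITY) column id:

SELECT setval(pg_get_serial_sequence('tbl', 'id')
            , COALESCE(max(id) + 1, 1)
            , false)  -- false: the next nextval() returns exactly this value
FROM   tbl;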

While inserting multiple rows in a single statement, you also have to rule out duplicates within the inserted set. Example code:

  • Return data from subselect used in INSERT in a Common Table Expression
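A minimal sketch of the same idea (hypothetical input values; DISTINCT folds duplicates within the inserted set):

INSERT INTO tbl (info)
SELECT DISTINCT v.info
FROM  (VALUES ('xxx'), ('yyy')) v(info)
WHERE  NOT EXISTS (SELECT FROM tbl WHERE tbl.info = v.info);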

But if you do have concurrent writes, none of the above works reliably, on principle. Better learn to accept gaps in the IDs. You can always run a query with row_number() OVER (ORDER BY id) to generate gap-less numbers after the fact; see the sketch after the list below. However, the numbers are still arbitrary to a degree: smaller numbers were not necessarily committed earlier. There are exceptions under concurrent write load. Related:

  • Primary Key Value Not Incrementing Correctly
  • Serial numbers per group of rows for compound key
  • Auto increment table column
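The row_number() query mentioned above might look like this (gapless_id is just a made-up alias):

SELECT row_number() OVER (ORDER BY id) AS gapless_id, *
FROM   tbl;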

Or consider a UUID instead (data type uuid) and avoid the inherent problem of duplicates with random values in a huge key space. Not at all serial, though:

  • Generating a UUID in Postgres for Insert statement?
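A minimal sketch, assuming Postgres 13 or later where gen_random_uuid() is built in (older versions need the pgcrypto extension); the table name tbl_uuid is made up:

CREATE TABLE tbl_uuid (
  id   uuid PRIMARY KEY DEFAULT gen_random_uuid()
, info text UNIQUE
);
-- failed duplicate inserts no longer leave visible gaps in id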
answered Nov 28 '22 by Erwin Brandstetter