Suppress "duplicate key value violates unique constraint" errors

I'm developing a Rails 3 app that uses Postgres as its database. I've got the table shown below:

             Table "public.test"
    Column     |  Type   | Modifiers
---------------+---------+-----------
 id            | integer | not null
 some_other_id | integer |
Indexes:
    "test_pkey" PRIMARY KEY, btree (id)
    "some_other_id_key" UNIQUE CONSTRAINT, btree (some_other_id)

This has two columns:

  • id, which is the primary key (automatically created by rails)
  • some_other_id, which contains keys generated by another system. This id needs to be unique, so I've added a unique key constraint to the table.

Now if I try to insert a row with a duplicate some_other_id, it fails (good) and I get the following output in my Postgres logs:

ERROR:  duplicate key value violates unique constraint "some_other_id_key"

The problem is that it's completely mainline for my app to try to add the same ID twice, so my logs are being spammed with this "ERROR" message. That causes various problems: the log files take a lot of disk space, real diagnostics get lost in the noise, Postgres has to throw away diagnostics to keep the log files within size limits, etc.

Does anyone know how I can either:

  • Suppress the log, either by suppressing all logs about this key, or perhaps by specifying something on the transaction that tries to do the INSERT.
  • Use some other Postgres feature to spot the duplicate key and not try the INSERT. I've heard of rules and triggers but I can't get either to work (though I'm no Postgres expert).

Note that any solution needs to work with Rails, which does its inserts like this:

INSERT INTO test (some_other_id) VALUES (123) RETURNING id;
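For illustration, the failure mode can be reproduced with any database that enforces unique constraints. The sketch below uses SQLite via Python's standard library as a stand-in for Postgres (the table and values mirror the ones above; the exception type and message differ, but the behavior is the same):

```python
import sqlite3

# In-memory stand-in for the table above: id is the primary key,
# some_other_id carries a UNIQUE constraint.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test (
        id            INTEGER PRIMARY KEY,
        some_other_id INTEGER UNIQUE
    )
""")

conn.execute("INSERT INTO test (some_other_id) VALUES (123)")

# A second insert of the same value violates the unique constraint
# and raises an error, just as Postgres rejects it and logs "ERROR".
try:
    conn.execute("INSERT INTO test (some_other_id) VALUES (123)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

In Postgres the rejection additionally lands in the server log, which is exactly the spam the question is about.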
Alex Hockey asked Sep 12 '12 10:09




2 Answers

You can disable logging of error messages for the session (or even globally), but it requires superuser privileges. By running:

set log_min_messages=fatal;

only fatal errors are logged until the session (=connection) is ended or you issue a new set statement to reset the value.

But as only a superuser is allowed to change this, it is probably not a good solution: it would require your application user to have that privilege, which is a major security problem.

a_horse_with_no_name answered Oct 26 '22 21:10


To avoid the duplicate key error to begin with:

INSERT INTO test (some_other_id)
SELECT 123
WHERE  NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = 123)
RETURNING id;

I am assuming id is a serial column that gets its value automatically.

This is subject to a very tiny race condition (in the time slot between the SELECT and the INSERT). But the worst that can happen is that you get a duplicate key error after all and this will hardly ever occur and shouldn't be a problem in your case.

You can always fall back to raw SQL if your framework restricts the syntax you can use.

Or you can create a UDF (user defined function) for the purpose:

CREATE FUNCTION f_my_insert(int)
  RETURNS int LANGUAGE sql AS
$func$
INSERT INTO test (some_other_id)
SELECT $1
WHERE  NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = $1)
RETURNING id;
$func$;

Call:

SELECT f_my_insert(123);

Or, to default to an already existing id:

CREATE FUNCTION f_my_insert(int)
  RETURNS int LANGUAGE plpgsql AS
$func$
DECLARE
   _id int;
BEGIN
   SELECT id INTO _id FROM test WHERE some_other_id = $1;

   IF NOT FOUND THEN
      INSERT INTO test (some_other_id)
      VALUES ($1)
      RETURNING id INTO _id;
   END IF;

   RETURN _id;
END
$func$;

Again, that leaves a minimal chance for a race condition. You can eliminate that at the cost of slower performance:

  • Is SELECT or INSERT in a function prone to race conditions?
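The select-or-insert logic, including a recovery path for the race described above, can be sketched in application code as well. This is a hedged illustration using SQLite via Python's standard library (the `get_or_insert` helper is an assumption for demonstration, not the answer's function):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test (
        id            INTEGER PRIMARY KEY,
        some_other_id INTEGER UNIQUE
    )
""")

def get_or_insert(conn, other_id):
    """Return the id for other_id, inserting a new row only if none exists."""
    row = conn.execute(
        "SELECT id FROM test WHERE some_other_id = ?", (other_id,)
    ).fetchone()
    if row:
        return row[0]
    try:
        cur = conn.execute(
            "INSERT INTO test (some_other_id) VALUES (?)", (other_id,)
        )
        return cur.lastrowid
    except sqlite3.IntegrityError:
        # Lost the race: another writer inserted the row between our
        # SELECT and our INSERT. Fall back to reading the existing id.
        return conn.execute(
            "SELECT id FROM test WHERE some_other_id = ?", (other_id,)
        ).fetchone()[0]

first = get_or_insert(conn, 123)
second = get_or_insert(conn, 123)
print(first == second)  # both calls resolve to the same row
```

Catching the constraint violation client-side still triggers a server-side log entry in the rare race case, which is why the careful locking approaches linked above exist.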
Erwin Brandstetter answered Oct 26 '22 21:10